Saturday, 7 April 2018

Should research funding be allocated at random?


Earlier this week, a group of early-career scientists had an opportunity to quiz Jim Smith, Director of Science at the Wellcome Trust. The ECRs were attending a course on Advanced Methods for Reproducible Science that I ran with Chris Chambers and Marcus Munafo, and Jim kindly agreed to come along for an after-dinner session which started in a lecture room and ended in the bar.

Among other things, he was asked about the demand for small-scale funding. In some areas of science, a grant of £20-30K could be very useful in enabling a scientist to employ an assistant to gather data, or to buy a key piece of equipment. Jim pointed out that from a funder’s perspective, small grants are not an attractive proposition, because the costs of administering them (finding reviewers, running grant panels, etc.) are high relative to the benefits they achieve. It is also likely that there will be far more applicants for small grants than for large ones.

This made me wonder whether we might retain the benefits of small grants while dispensing with most of the bureaucracy. A committee would still need to scrutinise proposals to check that they met the funder’s remit and were of high methodological quality; provided they did, each proposal would be entered into a pool, with winners selected at random.
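To make the mechanics concrete, here is a minimal sketch of such a pool-and-draw scheme in Python. The field names, the screening checks and the number of awards are illustrative assumptions of mine, not a description of any funder’s actual process.

```python
import random

# Hypothetical proposals; 'meets_remit' and 'sound_methods' stand in for the
# committee's light-touch screening decisions.
proposals = [
    {"id": "P1", "meets_remit": True,  "sound_methods": True},
    {"id": "P2", "meets_remit": True,  "sound_methods": False},
    {"id": "P3", "meets_remit": False, "sound_methods": True},
    {"id": "P4", "meets_remit": True,  "sound_methods": True},
    {"id": "P5", "meets_remit": True,  "sound_methods": True},
]

# Step 1: committee triage -- only proposals passing both checks enter the pool.
pool = [p for p in proposals if p["meets_remit"] and p["sound_methods"]]

# Step 2: winners are drawn at random, without replacement, up to the budget.
n_awards = 2  # assumed number of grants the budget will cover
winners = random.sample(pool, k=min(n_awards, len(pool)))

print("Funded:", sorted(p["id"] for p in winners))
```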

Implicit in this proposal is the idea that it isn’t possible to rank applications reliably. If a lottery approach meant we ended up funding weak research and denying funds to excellent projects, this would clearly be a bad thing. But ranking research by committee and/or peer review is notoriously unreliable, and it is hard to compare proposals that span a range of disciplines. Many people feel that funding is already a lottery, albeit an unintentional one, because the same grant that succeeds in one round may be rejected in the next. Interviews are problematic because they mean that a major decision – fund or not – is made on the basis of a short sample of a candidate’s behaviour, and that people with great proposals but poor social skills may be turned down in favour of glib individuals who can sell themselves more effectively.

I thought it would be interesting to float this idea in a Twitter poll. I anticipated that enthusiasm for the lottery approach might be higher among those who had been unsuccessful in getting funding, but in fact the final result was pretty similar regardless of the respondent’s funding status: most approved of a lottery approach, with 66% in favour and 34% against.


As is often the way with Twitter, the poll encouraged people to point me to an existing literature I had not been aware of. In particular, last year Mark Humphries (@markdhumphries) made a compelling argument for randomness in funding allocations, focusing on the expense and unreliability of current peer review systems. Hilda Bastian and others pointed me to work by Shahar Avin, who has done a detailed scholarly analysis of the policy implications of random funding – in the course of which he mentions three funding systems where this has been tried. In another manuscript, Avin presented a computer simulation comparing explicit random allocation with peer review. The code is openly available, and the results from the scenarios Avin modelled are provocative in supporting the case for including an element of randomness in funding. (Readers may also be interested in this simulation of the effect of luck on a meritocracy, which is not specific to research funding but has some relevance.) Others pointed to even more radical proposals, such as collective allocation of science funding, giving all researchers a limited amount of funding, or yoking risk to reward.
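Avin’s openly available code is the place to go for the real model; purely as a toy illustration of the underlying argument (not a reproduction of his simulation), the sketch below assumes that peer-review scores are a noisy measure of a proposal’s true merit, and compares funding the top-scored proposals with a lottery among proposals that clear the same noisy quality threshold. The noise level, threshold and pool sizes are arbitrary assumptions.

```python
import random
import statistics

random.seed(1)

def run_round(n_proposals=100, n_funded=10, noise_sd=0.5, threshold=0.5):
    """One funding round: mean true merit of funded proposals under
    (a) noisy peer review and (b) a lottery above a noisy quality bar."""
    merit = [random.random() for _ in range(n_proposals)]     # latent quality
    scores = [m + random.gauss(0, noise_sd) for m in merit]   # noisy review score

    # Peer review: fund the proposals with the highest scores.
    ranked = sorted(range(n_proposals), key=lambda i: scores[i], reverse=True)
    review_picks = ranked[:n_funded]

    # Lottery: light-touch triage on the same noisy score, then a random draw.
    eligible = [i for i in range(n_proposals) if scores[i] >= threshold]
    lottery_picks = random.sample(eligible, k=min(n_funded, len(eligible)))

    return (statistics.mean(merit[i] for i in review_picks),
            statistics.mean(merit[i] for i in lottery_picks))

results = [run_round() for _ in range(1000)]
print("mean merit, peer review:", round(statistics.mean(r[0] for r in results), 3))
print("mean merit, lottery:    ", round(statistics.mean(r[1] for r in results), 3))
```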

Having considered these sources and a range of additional comments on the proposal, I think it would be worth a funder such as the Wellcome Trust trialling random allocation of funding for proposals that meet a quality criterion. As noted by Dylan Wiliam, the key question is whether peer review does indeed select the best proposals. To test this, those who applied for Seed Funding could be randomly directed to either stream A, where proposals undergo conventional evaluation by committee, or stream B, where the committee engages in a relatively light-touch process to decide whether to enter the proposal into a lottery, which then decides its fate. Streams A and B would each have the same budget, and their outcomes could be compared a few years later.
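As a sketch of how applications might be allocated to the two arms of such a trial (assuming nothing more than a list of application IDs and two equal streams), a simple shuffle-and-split would do:

```python
import random

def assign_streams(application_ids, seed=None):
    """Randomly split applications into two equal-budget streams:
    A = conventional committee evaluation, B = triage plus lottery."""
    rng = random.Random(seed)
    ids = list(application_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    return {"A": ids[:half], "B": ids[half:]}

# Hypothetical application IDs for one funding round.
streams = assign_streams([f"APP-{i:03d}" for i in range(1, 21)], seed=2018)
print(len(streams["A"]), "applications to committee review (stream A)")
print(len(streams["B"]), "applications to the lottery arm (stream B)")
```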

One reason I’d recommend this approach specifically for Seed Funding is the disproportionate administrative burden of small grants. In principle there would be no reason not to extend the idea to larger grants, but I suspect that the more money is at stake, the greater the reluctance to include an explicit element of chance in the funding decision. And, as Shahar Avin noted, very expensive projects need long-term support, which makes a lottery approach unsuitable.

Some of those responding to the poll noted potential drawbacks. Hazel Phillips suggested that random assignment would make it harder to take account of strategic concerns, such as career stage or importance of topic. But if the funder had particular priorities of this kind, it could create a separate pool for a subset of proposals that met additional criteria and that would be given a higher chance of funding. Another concern was gaming by institutions or individuals submitting numerous proposals in scattergun fashion. I don’t see this as a serious objection, as (a) the initial quality triage would weed out proposals that were poorly motivated, and (b) applicants could be limited to one proposal per round. Most of the other critical comments concerned the initial triage: how would the threshold for entry into the pool be set? A triage stage may look as if it just pushes the decision-making problem back a step, but in practice it would be feasible to develop transparent criteria for determining which proposals did not get into the pool: some have methodological limitations which mean they could not give a coherent answer to the question they pose; some research questions are ill-formed; others have already been answered adequately. This blogpost by Paul Glasziou and Iain Chalmers makes a good start in identifying characteristics of research proposals that should not be considered for funding.
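To show that strategic priorities need not be incompatible with a lottery, here is one possible weighting scheme sketched in Python. The idea of giving priority proposals extra weight in the draw, and the 2:1 weighting itself, are my own illustrative assumptions rather than anything a funder has proposed.

```python
import random

def weighted_draw(pool, n_awards, priority_weight=2.0, rng=None):
    """Draw winners without replacement, giving flagged proposals
    (e.g. strategic topics or early-career applicants) a higher chance."""
    rng = rng or random.Random()
    remaining = list(pool)
    winners = []
    while remaining and len(winners) < n_awards:
        weights = [priority_weight if p["priority"] else 1.0 for p in remaining]
        pick = rng.choices(remaining, weights=weights, k=1)[0]
        winners.append(pick)
        remaining.remove(pick)
    return winners

# Hypothetical triaged pool: every third proposal is flagged as a priority.
pool = [{"id": f"P{i}", "priority": i % 3 == 0} for i in range(1, 11)]
chosen = weighted_draw(pool, n_awards=3, rng=random.Random(7))
print([p["id"] for p in chosen])
```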

My view is that the lottery approach has advantages over and above the resource issues. First, Avin’s analysis concludes that reliance on peer review leads to a bias against risk-taking, which can mean that novelty and creativity are discouraged. Second, once a proposal was in the pool, there would be no scope for bias against researchers in terms of gender or race – something that can be a particular concern when interviews are used to assess candidates. Third, the impact on the science community is also worth considering. Far less grief would be engendered by a grant rejection if you knew it meant you had been unlucky, rather than that you had been judged wanting. Furthermore, as noted by Marina Papoutsi, some institutions evaluate their staff in terms of how much grant income they bring in – a process that ignores the strong element of chance that already affects funding decisions. A lottery approach, where the randomness is explicit, would put paid to such practices.




3 comments:

  1. This need not be an either/or situation. You could set aside a fraction of funds to be awarded by lottery (http://neurodojo.blogspot.com/2008/01/rethinking-granting-part-2.html). Such awards might have disproportionately big effects.

  2. One problem that I see with random allocation is that in the long run it can be very unfair: if you are very lucky, you can get all your projects funded; if you are very unlucky, you never get funded. In our experiments, we usually use permutation rather than randomisation of items precisely in order to avoid such unequal distributions.
    Another possibility would be to enter every application into a waiting list, so that each gets funded in its own time. There are of course also drawbacks to that, in terms of timeliness.
    A simpler measure would be to prevent the lucky ones from re-entering the draw for a number of years. This would solve the problem with the lucky ones, but not necessarily with the unlucky ones.
    There may be better solutions. At any rate, we need to think about pseudo-random rather than random allocation.
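A minimal sketch of the pseudo-random (permutation-style) allocation this comment describes, under the assumption of a fixed applicant pool and one award per round: applicants are drawn from a shuffled queue, so nobody is funded a second time before everyone has been funded once. The class and parameters are illustrative only.

```python
import random

class PermutationLottery:
    """Pseudo-random allocation: draw applicants from a shuffled queue,
    reshuffling only once the queue is exhausted, so funding evens out
    across rounds (an illustration of the permutation idea above)."""

    def __init__(self, applicants, seed=None):
        self.rng = random.Random(seed)
        self.applicants = list(applicants)
        self.queue = []

    def draw(self):
        if not self.queue:                # start a new cycle
            self.queue = self.applicants[:]
            self.rng.shuffle(self.queue)
        return self.queue.pop()

lottery = PermutationLottery(["A", "B", "C", "D", "E"], seed=3)
# Ten successive rounds: each applicant is funded exactly twice.
print([lottery.draw() for _ in range(10)])
```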

  3. I also think that by focusing on the random aspects of peer review, we may lose its benefits in terms of constructive criticism and improvement of projects (I know you've written about this too, Dorothy). So you may want a hybrid system: a first stage of peer review aimed at basic quality control AND at improving the projects as much as possible, then a second stage in which all suitably revised projects enter the draw.
