Sunday, 15 July 2012

The devaluation of low-cost psychological research

Psychology encompasses a wide range of subject areas, including social, clinical and developmental psychology, cognitive psychology and neuroscience. The costs of doing different types of psychology vary hugely. If you just want to see how people remember different types of material, for instance, or test children's understanding of numerosity, this can be done at very little cost. For most of the psychology I did as an undergraduate, data collection did not involve complex equipment, and data analysis was pretty straightforward - certainly well within the capabilities of a modern desktop computer. The main cost for a research proposal in this area would be for staff to do data collection and analysis. Neuroscience, however, is a different matter. Most kinds of brain imaging require not only expensive equipment, but also a building to house it and staff to maintain it, and all or part of these costs will be passed on to researchers. Furthermore, data analysis is usually highly technical and complex, and can take weeks, or even months, rather than hours. A project that involves neuroimaging will typically cost orders of magnitude more than other kinds of psychological research.
In academic research, money follows money. This is quite explicit in funding systems that reward an institution in proportion to its research income. This makes sense: an institution that is doing costly research needs funding to support the infrastructure for that research. The problem is that the money, rather than the research, can become the indicator of success. Hiring committees will scrutinise CVs for evidence of ability to bring in large grants. My guess is that, if choosing between one candidate with strong publications and modest grant income vs. another with less influential publications and large grant income, many would favour the latter. Universities, after all, have to survive in a tough financial climate, and so we are all exhorted to go after large grants to help shore up our institution's income. Some universities have even taken to firing people who don't bring in the expected income. This means that cheap, cost-effective research in traditional psychological areas will be devalued relative to more expensive neuroimaging.
I have no quarrel, in principle, with psychologists doing neuroimaging studies - some of my best friends are neuroimagers - and it is important that, if good science is to be done in this area, it is properly funded. I am uneasy, though, about an unintended consequence of the enthusiasm for neuroimaging: it has led to a devaluation of other kinds of psychological research. I've been reading Thinking, Fast and Slow, by Daniel Kahneman, a psychologist who has the rare distinction of being a Nobel Laureate. He is just one example of a psychologist who has made major advances without using brain scanners. I couldn't help thinking that Kahneman would not fare well in the current academic climate, because his experiments were simple, elegant ... and inexpensive.
I've suggested previously that systems of academic rewards need to be rejigged to take into account not just research income and publication outputs, but the relationship between the two. Of course, some kinds of research require big bucks, but large-scale grants are not always cost-effective. And on the other side of the coin, there are people who do excellent, influential work on a small budget.
I thought I'd see if it might be possible to get some hard data on how this works in practice. I used data for Psychology Departments from the last Research Assessment Exercise (RAE), from this website, and matched this up against citation counts for publications that came out in the same time period (2000-2007) from Web of Knowledge. The latter is a bit tricky, and I'm aware that figures may contain inaccuracies, as I had to search by address, using the name of the institution coupled with the words Psychology and UK. This will miss articles that don't have these words in the address. Also, when double-checking the numbers, I found that results for a search by address can fluctuate from one occasion to the next. For these reasons, I'd urge readers to treat the results with caution, and I won't refer to institutions by name. Note too that, though I restrict consideration to articles published between 2000 and 2007, the citations extend beyond the period when the RAE was completed.
Web of Knowledge helpfully gives you an H-index for the institution if you ask for a citation report, and this is what I report here, as it is more stable across repeated searches than the raw citation count. Figure 1 shows how a department's research income relates to its H-index, just for those institutions deemed research active, which I defined as having a research income of at least £500K over the reporting period. The overall RAE rating is colour-coded into bandings, and the symbol denotes whether or not the departmental submission mentions neuroimaging as an important part of its work.
Figure 1. Research income vs. H-index by department. Data from RAE and Web of Knowledge: treat with caution!
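For readers unfamiliar with the metric: an H-index is simply the largest number h such that h of a department's papers have each been cited at least h times. A minimal sketch of the computation, using made-up citation counts rather than real data:

```python
def h_index(citations):
    """Largest h such that h papers each have at least h citations."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # this paper still clears the bar
        else:
            break
    return h

# Hypothetical department with five papers cited 10, 8, 5, 4 and 1 times:
print(h_index([10, 8, 5, 4, 1]))  # 4 (four papers have >= 4 citations each)
```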
Several features are seen in these data, and most are unsurprising:
  • Research income and H-index are positively correlated, r = .74 (95% CI .59-.84), as we would expect. Both variables are correlated with the number of staff entered in the RAE, but the correlation between them remains healthy when this factor is partialled out, r = .61 (95% CI .40-.76). (A sketch of these calculations follows this list.)
  • Institutions coded as doing neuroimaging have bigger grants: after taking into account differences in number of staff, the mean income for departments with neuroimaging was £7,428K and for those without it was £3,889K (difference significant at p = .01).
  • Both research income and H-index are predictive of RAE rankings: the correlations are .68 (95% CI .50-.80) for research income and .79 (95% CI .66-.87) for H-index, and together they account for 80% of the variance in rankings. We would not expect perfect prediction, given that the RAE committee went beyond metrics to assess aspects of research quality not reflected in citations or income. In addition, the citations counted here are for all researchers at a departmental address, not just those entered in the RAE.
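For anyone who wants to check or extend these figures, the statistics in the first bullet point are easy to reproduce. Here is a minimal sketch in Python (illustrative only, not the analysis pipeline used here; income, hindex and staff stand for per-department arrays). The confidence interval uses the standard Fisher z-transform, and the partial correlation is obtained by residualising both variables on staff numbers:

```python
import numpy as np
from scipy import stats

def pearson_with_ci(x, y, alpha=0.05):
    """Pearson r with a Fisher z-transform confidence interval."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    r, _ = stats.pearsonr(x, y)
    z = np.arctanh(r)                    # Fisher transform of r
    se = 1.0 / np.sqrt(len(x) - 3)       # approximate standard error of z
    crit = stats.norm.ppf(1 - alpha / 2)
    lo, hi = np.tanh(z - crit * se), np.tanh(z + crit * se)
    return r, (lo, hi)

def partial_corr(x, y, covar):
    """Correlation of x and y after regressing both on a covariate."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    design = np.column_stack([np.ones(len(x)), np.asarray(covar, float)])
    resid_x = x - design @ np.linalg.lstsq(design, x, rcond=None)[0]
    resid_y = y - design @ np.linalg.lstsq(design, y, rcond=None)[0]
    return stats.pearsonr(resid_x, resid_y)[0]

# Hypothetical usage with per-department arrays:
# r, ci = pearson_with_ci(income, hindex)   # e.g. r = .74
# rp = partial_corr(income, hindex, staff)  # e.g. r = .61, staff partialled out
```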
A point of concern to me in these data, though, is the wide spread in H-index seen for those institutions with the highest levels of grant income. If these numbers are accurate, some departments are using their substantial income to do influential work, while others seem to achieve no more than departments with much less funding. There may be reasonable explanations for this - for instance, a large tranche of funding may have been awarded within the RAE period but not had time to percolate through to publications. Nevertheless, it adds to my concern that we may be rewarding those who chase big grants without paying sufficient attention to what they do with the funding when they get it.
What, if anything, should we do about this? I've toyed in the past with the idea of a cost-efficiency metric (e.g. citations divided by grant income), but this would not work as a basis for allocating funds, because some types of research are intrinsically more expensive than others. In addition, it is difficult to get research funding, and success in this arena is in itself an indicator that the researchers have impressed a tough committee of their peers. So, yes, it makes sense to treat level of research funding as one indicator of an institution's research excellence when rating departments to determine who gets funding. My argument is simply that we should be aware of the unintended consequences if we rely too heavily on this metric. It would be nice to see some kind of indicator of cost-effectiveness included in ratings of departments alongside the more traditional metrics. In times of financial stringency, it is particularly short-sighted to discount the contribution of researchers who are able to do influential work with relatively scant resources.
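For concreteness, the crude metric I have in mind would amount to no more than the following toy sketch (hypothetical inputs; as argued above, a raw ratio like this would unfairly penalise intrinsically expensive fields, so it could only ever be one indicator among several):

```python
import numpy as np

def citations_per_million(citations, income_k):
    """Crude value-for-money index: citations per £1M of grant income.

    citations: total citations for a department's output in the period
    income_k:  research income over the same period, in £K
    Toy sketch only -- some research is intrinsically more expensive,
    so any real use would need field-specific benchmarks.
    """
    income_m = np.asarray(income_k, dtype=float) / 1000.0  # £K -> £M
    return np.asarray(citations, dtype=float) / income_m
```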


6 comments:

  1. I agree that it is very important to consider the cost effectiveness of research. Just a couple of further comments - as someone who does around 50% fMRI and 50% other trad psychology, I find that paying RA/postdoc salaries still makes up the VAST majority of the costs on the grants I apply for. fMRI adds 25-50% to the total, but does not double the cost of the research.

    Second, at the moment fMRI journals seem to have higher impact factors than non-fMRI journals. I don't know why this is and it isn't necessarily a good idea, but could that be a driver of the correlations above?

    Replies
    1. Neuroimaging papers are cited at a much higher rate than behavioral work. Comparing IF across disciplines doesn't quite work because of the different numbers of researchers. It may simply be that there are way more researchers out there looking to cite neuroimaging work in general.

  2. I am writing a proposal at the moment for the Templeton Foundation (kind of like a US equivalent of Wellcome). They have quite elaborate budget sections in which I must justify why what I am proposing is worth the money I am asking for, and why cheaper methods will not work. It is a bit of a pain, but I really like the way it makes it explicit that they are not going to waste their money. It should make it much easier for them to compare the value they are getting from different proposals. If all funding agencies were biased towards the cheapest method of making major advances, a lot more good science could be done.

  3. Oh, and if funding agencies do their job, the funding they provide breaks even for the University - i.e., they pay out the actual costs. Thus, the idea that funded research creates "income" for the University is misleading. If the University were a business, you would say that it increases gross sales, not net profit. Many businesses fail through not understanding the difference, and I often fear a large number of research universities are headed in the same direction.

  4. The British Psychological Society and the Experimental Psychology Society of Great Britain are supposed to be working together to provide and analyse some data on recent psychology-related grants that contain an element of brain imaging work, in order to look at the impact on psychology of the expensiveness of brain imaging work. See Recommendation 3 at http://tinyurl.com/759ejya. This was recommended by the recent international benchmarking review of UK psychology: its report is at http://tinyurl.com/7vgev9l . Let's see what happens here.

  5. Thanks Max - v. glad to see the issue being addressed.
    Thanks too Eric, but it's not the case that there are no benefits from high levels of funding. I should have made clear, for non-UK readers, that our research assessment exercise determines the levels of central funding allocated to universities. So if big grants get you a higher RAE rating, your institution benefits: see for instance http://www.timeshighereducation.co.uk/story.asp?storycode=405676
