Psychology encompasses a wide range of subject areas,
including social, clinical and developmental psychology, cognitive psychology
and neuroscience. The costs of doing different types of psychology vary hugely.
If you just want to see how people remember different types of material, for
instance, or test children's understanding of numerosity, this can be done at very
little cost. For most of the psychology I did as an undergraduate, data
collection did not involve complex equipment, and data analysis was pretty
straightforward - certainly well within the capabilities of a modern desktop
computer. The main cost for a research proposal in this area would be for staff
to do data collection and analysis. Neuroscience, however, is a different
matter. Most kinds of brain imaging require not only expensive equipment, but
also a building to house it and staff to maintain it, and all or part of these
costs will be passed on to researchers. Furthermore, data analysis is usually
highly technical and complex, and can take weeks, or even months, rather than
hours. A project that involves neuroimaging will typically cost orders of
magnitude more than other kinds of psychological research.
In academic research, money follows money. This is quite
explicit in funding systems that reward an institution in proportion to its
research income. This makes sense: an institution that is doing costly research
needs funding to support the infrastructure for that research. The problem is
that the money, rather than the research, can become the indicator of success. Hiring
committees will scrutinise CVs for evidence of ability to bring in large
grants. My guess is that, if choosing between one candidate with strong
publications and modest grant income vs. another with less influential
publications and large grant income, many would favour the latter.
Universities, after all, have to survive in a tough financial climate, and so
we are all exhorted to go after large grants to help shore up our institution's
income.
Some universities have even taken to firing people who don't bring in
the expected income. This means that cheap, cost-effective research in
traditional psychological areas will be devalued relative to more expensive
neuroimaging.
I have no quarrel, in principle, with psychologists doing
neuroimaging studies - some of my best friends are neuroimagers - and if good science is to be done in
this area, it is important that it be properly funded. I am uneasy, though, about an
unintended consequence of the enthusiasm for neuroimaging, which is that it has
led to a devaluation of other kinds of psychological research. I've been
reading
Thinking, Fast and Slow,
by Daniel Kahneman, a psychologist who has the rare distinction of
being a Nobel Laureate. He is just one example of a psychologist who has made major advances without using brain scanners. I couldn't help thinking that Kahneman would not fare
well in the current academic climate, because his experiments were simple,
elegant ... and inexpensive.
I've
suggested previously that systems of academic rewards
need to be rejigged to take into account not just research income and
publication outputs, but the relationship between the two. Of course, some
kinds of research require big bucks, but large-scale grants are not always
cost-effective. And on the other side of the coin, there are people who do
excellent, influential work on a small budget.
I thought I'd see if it might be possible to get some hard
data on how this works in practice. I used data for Psychology Departments from
the last Research Assessment Exercise (RAE), from
this website, and matched
this up against citation counts for publications that came out in the same time
period (2000-2007) from
Web of Knowledge. The latter is a bit tricky, and I'm
aware that figures may contain inaccuracies, as I had to search by address,
using the name of the institution coupled with the words Psychology and UK. This will miss articles that don't have these words in the address. Also, when double-checking the numbers, I found that results of a search by address can fluctuate from one occasion to the next. For these reasons, I'd urge readers to treat the results with caution, and
I won't refer to institutions by name. Note too that, though I restricted consideration to articles published between 2000 and 2007, the citations extend
beyond the period when the RAE was completed. Web of Knowledge helpfully gives
you an
H-index for the institution if you ask for a citation report, and this
is what I report here, as it is more stable across repeated searches than the citation count. Figure 1 shows how research income for a department
relates to its H-index, just for those institutions deemed research active,
which I defined as having a research income of at least £500K over the reporting
period. The overall RAE rating is colour-coded into bandings, and the symbol denotes
whether or not the departmental submission mentions neuroimaging as an
important part of its work.
Figure 1. Data from RAE and Web of Knowledge: treat with caution!
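For readers unfamiliar with the H-index, here is a minimal sketch of how it is derived from a set of citation counts. Web of Knowledge does this calculation for you, and the figures in the example are invented purely to illustrate the definition.

```python
def h_index(citations):
    """Largest h such that at least h papers have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical department with six papers and these (invented) citation counts
print(h_index([48, 30, 12, 6, 5, 1]))  # prints 5
```

Because it depends only on the most-cited papers rather than on the full tally, the H-index may be less affected by articles dropping in and out of an address search, which would help explain why it is more stable across repeated searches than the raw citation count.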
Several features are seen in these data, and most are
unsurprising:
- Research income and H-index are positively correlated, r =
.74 (95% CI .59-.84), as we would expect. Both variables are correlated with the
number of staff entered in the RAE, but the correlation between them remains
healthy when this factor is partialled out, r = .61 (95% CI .40-.76). (A sketch of how these statistics can be computed follows this list.)
- Institutions coded as doing neuroimaging have bigger grants: after taking into account differences in number of staff, the mean income
for departments with neuroimaging was £7,428K and for those without it was
£3,889K (difference significant at p = .01).
- Both research income and H-index are predictive of RAE
rankings: the correlations are .68 (95% CI .50-.80) for research income and .79
(95% CI .66-.87) for H-index, and together they account for 80% of the variance
in rankings. We would not expect perfect prediction, given that the RAE committee
went beyond metrics to assess aspects of research quality not
reflected in citations or income. In addition, the citations counted here are for all researchers at a departmental address, not
just those entered in the RAE.
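To make the analysis concrete, here is a minimal sketch in Python of the three statistics reported above: the zero-order correlation between income and H-index, the partial correlation controlling for staff numbers, and the proportion of variance in ratings explained by income and H-index together. The data are made up on the spot; none of the numbers, variable names or model coefficients come from the RAE analysis itself.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40

# Invented departmental data: staff numbers, research income (in £K) and H-index
staff = rng.uniform(10, 60, n)
income = 100 * staff + rng.normal(0, 800, n)
h_index = 0.5 * staff + 0.003 * income + rng.normal(0, 5, n)

def pearson(x, y):
    """Pearson correlation between two arrays."""
    return np.corrcoef(x, y)[0, 1]

# Zero-order correlation between income and H-index
r_xy = pearson(income, h_index)

# Partial correlation between income and H-index, controlling for staff numbers
r_xz = pearson(income, staff)
r_yz = pearson(h_index, staff)
r_partial = (r_xy - r_xz * r_yz) / np.sqrt((1 - r_xz**2) * (1 - r_yz**2))

# Proportion of variance in an (invented) RAE-style rating explained by
# income and H-index together, via ordinary least squares
rating = 0.001 * income + 0.05 * h_index + rng.normal(0, 1, n)
X = np.column_stack([np.ones(n), income, h_index])
beta, *_ = np.linalg.lstsq(X, rating, rcond=None)
predicted = X @ beta
r_squared = 1 - np.sum((rating - predicted) ** 2) / np.sum((rating - rating.mean()) ** 2)

print(f"r = {r_xy:.2f}, partial r = {r_partial:.2f}, R-squared = {r_squared:.2f}")
```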
A point of concern to me in these data, though, is the wide
spread in H-index seen for those institutions with the highest levels of grant
income. If these numbers are accurate, some departments are using their
substantial income to do influential work, while others seem to achieve no more
than other departments with much less funding. There may be reasonable
explanations for this - for instance, a large tranche of funding may have been
awarded in the RAE period but not had time to percolate through to
publications. Nevertheless, it adds to my concern that we may
be rewarding those who chase big grants without paying sufficient attention to
what they do with the funding when they get it.
What, if anything, should we do about this? I've toyed in
the past with the idea of a cost-efficiency metric (e.g. citations divided by
grant income), but this would not work as a basis for allocating funds, because
some types of research are intrinsically more expensive than others. In
addition, it is difficult to get research funding, and success in this arena is
in itself an indicator that the researchers have impressed a tough committee of
their peers. So, yes, it makes sense to treat level of research funding as one indicator
of an institution's research excellence when rating departments to determine
who gets funding. My argument is simply that we should be aware of the
unintended consequences if we rely too heavily on this metric. It would be nice
to see some kind of indicator of cost-effectiveness included in ratings of
departments alongside the more traditional metrics. In times of financial
stringency, it is particularly short-sighted to discount the contribution of
researchers who are able to do influential work with relatively scant
resources.