Saturday, 29 January 2011

Orwellian prize for journalistic misrepresentation: an update


I've been re-reading George Orwell's Nineteen Eighty-Four to get me in the mood for looking at nominations for the Orwellian prize for journalistic misrepresentation. Bizarrely, it cheered me up. I last read it in 1968, and it made me nervous. But here we are in Airstrip One in 2010: we may be heading for mass unemployment and the dismantlement of the NHS, the BBC and the Universities, and we may be growing increasingly uncomfortable with the state's attitude to controlling dissent, but compared to what Winston Smith went through, this is paradise. New technologies are being used to oppress people the world over, but the internet has also emerged as a tool for fighting oppression. I can expostulate on my blog about things that concern me without the thought police carrying me off.

But, back to the prize. Why, I've been asked, did I choose that name? In Orwell's dystopian world, the press is used to achieve control, such that "nominally free news media are required to present 'balanced' coverage, in which every 'truth' is immediately neutered by an equal and opposite one. Every day public opinion is the target of rewritten history, official amnesia and outright lying, all of which is benevolently termed 'spin', as if it were no more harmful than a ride on a merry-go-round." This captured what I felt when I read the coverage of certain scientific discoveries in the press. I knew that there was an Orwell Prize to celebrate the best of British political journalism; I saw the Orwellian Prize as the inverse – a way of noting the worst of science journalism.

I wasn't so concerned by the fact that journalists sometimes make mistakes. That's inevitable when writing under time pressure on subjects where one does not have expertise. I accept too that journalists, and their editors, are not required to be neutral, and are entitled to promulgate their opinions. I'm concerned, however, when mistakes have implications for people who are vulnerable, who might be misled into adopting an ineffective treatment, or shunning an effective one, by a prominent newspaper article. Or, in the case of major environmental issues, a piece on climate change or the impact of industrial practices might sway public opinion in a direction opposite to that held by informed scientists, and risk making our planet less habitable for future generations. What particularly disturbs me in science writing is when science is misrepresented, and that misrepresentation is done thoughtlessly or even knowingly: sometimes for political ends, but more often to get a 'good story' that will have an eye-catching headline and so sell more newspapers. The MMR story is the most vivid illustration I'm aware of. It's bad when hard-pressed readers are induced to spend money on ineffective treatments; it's worse when children risk disease or even death as a consequence of irresponsible journalism.

In a brilliant post last September, Martin Robbins lampooned the formulaic treatment of science stories that is so often seen in the media. His was a generic account, but in specific areas of science one can provide even more detail. Thus, for every new advance in neuroscience or genetics, there seems to be one of two possible conclusions: either (a) if confirmed, the discovery will make it possible to treat dementia/dyslexia/autism/Parkinson's disease/depression in future, or, more commonly, (b) the discovery will lead to better diagnosis of dementia/dyslexia/autism/Parkinson's disease/depression. Yet most of the research in this area is a long way from translation into clinical practice. We are still learning to interpret tools such as brain imaging and genetic analysis, and the more we do, the more complex it becomes. Researchers become rightly excited when they find a neural or genetic correlate of a disease, which could help our understanding of underlying causal mechanisms. This is a vital step toward effective clinical procedures, but it's usually unrealistic to imagine these will be available soon. Yet in their desire to appeal to human interest, journalists will exert pressure to make more of a story than is justified, by focusing on hypothetical rather than actual findings. The impetus behind the Orwellian prize is described in my earlier post, where I dissected an article describing a positive impact of fish oil on concentration in children with attention deficit hyperactivity disorder (ADHD). The study behind the story did not use fish oil, did not include children with ADHD, and did not find any behavioural benefits in the children who were given fatty acids. This article epitomised everything that made me angry about reporting of science in my area, with errors so numerous it was hard to believe they were accidental. I therefore threw down a challenge to others to see if they could find equally worrying examples. As it happens, that article was subsequently the focus of a scathing attack by Ben Goldacre, and was taken off the Observer's website.

Now, one of the things I love about blogging is its interactive nature. A number of commentators made points that made me think further. None of them were apologists for poor reporting, but they noted that journalists are not always to blame for an overhyped article. For a start, they have no control over headlines, which are typically written by a sub-editor whose job it is to attract the reader's attention. They are also fed information by institutional press officers, who may put a spin on a story in the hope that the media will pick it up. And researchers themselves are not immune from wanting their day of glory and being willing to 'accentuate the positive' rather than adopting the cautious, balanced approach that characterises a good scientist. I have to say that, although I knew this could happen, until I started looking at candidates for the Orwellian I had always thought that it was only a handful of maverick scientists with personality disorders who would behave in this way. I'd just assumed that for most scientists their reputation with colleagues would be far more important than a brief burst of media attention. I'd also reckoned that press offices would jealously guard the reputation of their institution for impeccably accurate science. But this proved to be naïve. Fame is seductive, and many will compromise their standards for their spot in the limelight.

Since throwing down my challenge, I've had two nominations for the Orwellian. The first was from Cambridge neuroscientist Jon Simons, who, true to the spirit of my original post, scored up his submission according to the system I'd proposed, which involved giving points for each statement that was inaccurate when checked against the original source article. The nominated newspaper report was by a Washington Post staff writer and appeared on 9th September under the headline "Scientists can scan brains for maturity, potentially gauging child development". Simons's computations gave it a total of 16 points, putting it on level pegging with the ADHD article. One needs only to look at Figure 1 from the original article, reproduced below, to see one reason why he was exasperated. The prediction from the "functional connectivity Maturation Index" (fcMI), while statistically significant, is far from precise, because of the variation around the average level for each age. Consequently, there is a fair amount of overlap in the range of scores seen for adults and children. If a normal adult can get a brain maturity index of an 8-year-old, and a normal 8-year-old can get a maturity index equivalent to an adult's, it is questionable just how useful the index would be at detecting abnormality. Also, the regression equation was computed on the combined child and adult data, and nowhere in the paper are data presented on the accuracy of prediction of chronological age from brain measures within the group of children alone. As an aside, our research group has tried similar things with the more low-tech methodology of event-related potentials, and we find it is relatively easy to discriminate child brains from adult brains, but not easy to distinguish between a 6-year-old and a 10-year-old (Bishop et al, 2007).
Figure 1 from Dosenbach et al, 2010
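To see why a statistically significant prediction can still be of limited diagnostic use, here is a minimal toy simulation in Python. The numbers are entirely invented and have nothing to do with the actual Dosenbach et al. data; the point is simply that a correlation can be highly significant across a wide age range while the child and adult score distributions still overlap substantially.

```python
# Toy simulation (invented numbers, NOT the Dosenbach et al. data):
# a 'maturation index' can correlate significantly with age while
# many adults still score within the child range.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

child_ages = rng.uniform(7, 15, 100)
adult_ages = rng.uniform(20, 30, 50)
ages = np.concatenate([child_ages, adult_ages])

# Index increases weakly with age, with plenty of noise around the trend
index = 0.05 * ages + rng.normal(0, 0.7, ages.size)

r, p = stats.pearsonr(ages, index)
print(f"r = {r:.2f}, p = {p:.2g}")  # clearly 'statistically significant'

child_index, adult_index = index[:100], index[100:]
# Proportion of adults whose index is no higher than the median child's
overlap = np.mean(adult_index <= np.median(child_index))
print(f"Adults scoring at or below the child median: {overlap:.0%}")
```

A significant regression across the whole sample says little about how precisely the index locates an individual child, which is exactly why the absence of within-child prediction accuracy in the paper matters.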

I would not be happy, however, giving the Orwellian award to the brain maturity piece, because the journalist does not seem to be at fault. Much of the newspaper article appears to be based on a press release from Oregon Health and Science University. Furthermore, the Science article reporting the findings is entitled Prediction of individual brain maturity using fMRI and concludes that the method "could one day provide useful information to aid in the screening, diagnosis, and prognosis of individuals with disordered brain function". I could not find any source for the claim in the newspaper article that the method might tell us "whether teenagers are grown-up enough to be treated as adults", though I did find a Neurolaw blogpost that could be the source of this idea: it stated "in the courtroom, this work could prove useful in determining whether children or adults should be culpable for their actions based on the maturation of the brains" (!). Overall, I felt that the majority of mistakes in this piece were either misinterpretations that were encouraged by the journal article or press release, or involved extrapolations beyond the data that originated with the authors. And, unlike some other articles focused on developmental disorders, I did not feel this one had much potential for harm.


Another contender for the prize came from Australia, where conservation ecologist Corey Bradshaw was exercised at the way his work on frogs had been misrepresented. In a paper in Conservation Biology entitled Eating frogs to extinction, he and his colleagues called for certification of the trade in frogs' legs, which has the potential to have a significant impact on frog populations. I won't attempt to summarise his arguments, which are eloquently stated in his own blogpost. All science journalists should read this – not just because it is witty, but also because it gives some insight into just how angry scientists get when they try hard to explain the science to a journalist who then gets it wrong. But this article, too, didn't seem to merit the prize. Quite apart from the fact that, dating back to 2009, it is really too early for the prize, there's also a sense of breaking a butterfly on a wheel. The article in question was not in a major newspaper, but rather on a blog called SlashFood. Admittedly, it's an award-winning blog, described by Time magazine as "a site for people who are serious about what they put in their bodies". But this was clearly a journalist coming at the frogs' legs story from the culinary rather than the scientific angle. And as Bradshaw himself says of the journalist, "Now, in all fairness, I think she was trying to do well…" Also, it's hard to see that the article did much harm, beyond raising the blood pressure of a distinguished conservation ecologist to dangerous levels. Replete with inaccuracies as it was, the overall impact would have been to make environmentally conscious people think twice about eating frogs' legs. Incidentally, journalists at the Guardian will be pleased to hear that Jon Henley's coverage of the same story got a ringing endorsement from Bradshaw.

These were the only nominations I received for the prize, and so I have to announce that it will not be awarded this year, although the two nominators will receive an alcoholic token of my appreciation for their efforts. Although I could give the award to the original Observer article, this does not seem justified given that the newspaper withdrew the piece when it became aware of the scientific criticism.

I will accept nominations for 2011, and am interested in receiving them even if they don't meet all the criteria: I am fascinated by interactions between the media and scientists, and by finding out more about what scientists do to irritate journalists, and vice versa. If we are going to improve science reporting, we need to understand one another. I hope that the lack of a serious contender for 2010 is telling us something about improving standards in science journalism - but maybe readers know better.

Wednesday, 12 January 2011

What works for women: some useful links

This is a work in progress! Feel free to suggest additions

Resources from Virginia Valian, including details of her book "Why So Slow?", a webcast, and other materials

 
Athene Donald’s blog

Jenny Rohn's blog

Blog on "becoming a domestic and laboratory goddess"

Demonstrations of schemata (see Valian for context)
 

STRIDE Faculty recruitment workshop readings
STRIDE faculty recruitment, other resources



Saturday, 8 January 2011

Should we ration research grant applications?

Researchers can never have enough funds. Talented people with bright ideas frequently fail to get funded, leading to low morale in academia. In the current financial climate, it is easy to put all the blame for this on government. But a recent consultation document by the Economic and Social Research Council (ESRC) shows that's not the full story. Between 2007 and 2009, the number of responsive mode grant applications rose by 39%. A similar picture is seen for other UK research councils. If more people apply for the same pot of money, it is inevitable that a smaller proportion of applications will be funded: in the case of ESRC, success rates have gone from 30% to 16%. Reductions of research council budgets in real terms mean things can only get worse. ESRC reckons that the status quo is not an option: once success rates fall this low, it ceases to be worthwhile for researchers to submit grant proposals. As well as the costs to applicants in time and effort, there are also financial costs to higher-education institutions in administering grant applications, and to ESRC in administrative staff time. The peer review process also starts to break down: on the one hand, it becomes difficult to find enough reviewers to handle the mounting tide of proposals, and on the other, reviewers become reluctant to say anything negative at all about a proposal, as only those with a uniform set of glowing reviews stand any chance.
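To make the arithmetic concrete, here is a back-of-the-envelope sketch in Python. The 39% rise in applications and the 30% and 16% success rates are the figures quoted above; the baseline number of applications is an invented round number, used only to show the scale of the effect.

```python
# Back-of-the-envelope success-rate arithmetic (baseline figure assumed).
apps_2007 = 1000                       # assumed baseline number of applications
awards_2007 = 0.30 * apps_2007         # 30% success rate -> 300 awards

apps_2009 = apps_2007 * 1.39           # 39% rise in applications

# If the number of awards had stayed constant, more applications alone
# would drag the success rate down:
print(f"Rate with fixed awards: {awards_2007 / apps_2009:.1%}")   # ~21.6%

# The reported 16% rate implies fewer awards were actually made:
print(f"Awards implied by a 16% rate: {0.16 * apps_2009:.0f}")    # ~222
```

On these illustrative figures, the rise in applications alone would only push the success rate down to about 22%; the reported fall to 16% suggests that the number of awards shrank as well.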

One of the UK research councils, the Engineering and Physical Sciences Research Council (EPSRC), has put new measures in place to attempt to reduce the flood of applications. The other research councils are discussing how to do this, and the ESRC is to be congratulated for taking soundings from the academic community about possible ways forward.

Their document, however, creates an odd mental state in academic readers. We’ve all been told for years that getting grant funding is a Good Thing. Individuals who can bring in the funds will be rewarded with promotion, tenure, and glittering prizes. Universities pride themselves on their grant income, which is a major factor in higher education rankings. Now, however, we are told that applying for grants is a Bad Thing, which the academic community needs to work with ESRC to cut back, with statements such as:

•    (ESRC has) an ambitious target of halving the number of applications submitted through its standard grants scheme by 2014
•    greater self regulation has the potential to significantly reduce the volume of applications submitted by institutions
•    (self-regulation will) probably not go far enough to achieve the 50% reduction that the Council is seeking to deliver

The five options outlined by ESRC (copied verbatim below) are:
•    Researcher Sanctions: This involves limiting the number of proposals from individual researchers who consistently fail to submit applications that reach an agreed quality threshold;
•    Institutional Sanctions:  This involves introducing sanctions for HEIs whose applications fail to meet a certain success rate and/or quality threshold;
•    Institutional Quotas for ‘managed mode’ schemes: This involves the introduction of institutional quotas for certain schemes (e.g. early career researcher schemes, Large Grants/Centres, Professorial Fellowships);
•    Institutional quotas for all schemes: This involves responsive as well as managed mode schemes; 
•    Charging for applications. Levying an agreed fee for institutions submitting applications, with the option that this levy is redeemable if the application is successful.

The pros and cons of each of these are discussed, and readers are asked to comment.

My response to ESRC is that they are looking for solutions in the wrong place. To fix the problem, they need to change the basic structure of university funding so that institutions and individuals are no longer assessed on the amount of research funding they bring in, but rather on an output/input function, i.e. how much bang you get for your buck.

I have argued in a previous blogpost that it makes no sense to reward people simply for securing large amounts of funding. Currently, a person who secures a £500K grant which leads to two publications in lower-ranking journals will be given more credit than one who generates five high-ranking publications over the same period with a £50K grant. Clearly, some sorts of research are much more expensive than others; the problem is that the current system discourages people from undertaking inexpensive research. In my own field of psychology, there are cases of people who have published an impressive body of work based largely on student projects: they do not, however, get much appreciation from their institutions. Meanwhile, it has become almost mandatory for psychology grant applications to include an expensive brain scanning component, even if this adds nothing to the scientific value of the research. The introduction of Full Economic Costing (FEC) has added to the problem, by introducing incentives for researchers to add collaborators to their proposals, as this will bump up the cost of the grant. In short, the combination of the Research Assessment Exercise (RAE) and FEC does the opposite of encouraging cost-effectiveness in grant applications - it makes people focus solely on cost, the higher the better.
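As a minimal sketch of what an output/input comparison might look like, using the two hypothetical researchers just described (the metric is one crude way of operationalising the idea, not a worked-out proposal):

```python
# Crude 'bang for buck' comparison for the two hypothetical cases above.
researchers = {
    "500K grant, 2 lower-ranking papers": {"funding_gbp": 500_000, "publications": 2},
    "50K grant, 5 high-ranking papers":   {"funding_gbp": 50_000,  "publications": 5},
}

for label, r in researchers.items():
    pubs_per_100k = r["publications"] / (r["funding_gbp"] / 100_000)
    print(f"{label}: {pubs_per_100k:.1f} publications per £100K")
# 500K grant, 2 lower-ranking papers: 0.4 publications per £100K
# 50K grant, 5 high-ranking papers: 10.0 publications per £100K
```

As argued below, a single generic metric would be unworkable across disciplines with very different costs; the point is simply to make output relative to income visible, rather than rewarding income alone.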

The same incentives have not only encouraged an anti-thrift mentality in higher education institutions, they have also changed expectations about the number of grants that academics should hold. Doing research takes time, and applicants are typically asked to quantify this in grant applications in terms of hours per week spent on the research. As far as I know, nobody ever adds up the estimated hours of research time for a person holding grants from different bodies. I suspect that there are cases where, if one were to total up estimated time across all a researcher's grants, it would exceed the number of hours in a week. This is particularly true for grant-holders in lectureships, who presumably are expected to spend some time on teaching activities. The system as it stands encourages an academic to apply for 5 grants, spending 1 hr per week on each of them, rather than to apply for a single grant on which they propose to spend 5 hr per week. Yet I’d bet that the quality of research would be better in the latter case, because high quality research takes time and thought. Over-commitment is encouraged by the current system, yet causes stress and waste. Many research-active academics are overwhelmed by backlogs of research data, because they feel compelled to submit more grant applications rather than writing up what they have done.

I have three suggested solutions to the current crisis. The first involves such a radical change to funding structures of Universities that it is unlikely to be implemented. The other two are both feasible:

1.    For Government: Ditch the current methods for allocating funding to Universities in favour of ones that reward, rather than punish, cost-effectiveness and thrifty use of research funds.
2.    For research councils: When evaluating research quality, focus more on the track record of outputs relative to income, so that funding is steered toward those who have demonstrated good cost-effectiveness. Given differences in costs and time-scale of outputs across disciplines, a generic metric would be unworkable, but just asking grants panels to look at this would be a step forward. Obviously this would not apply to new investigators.
3.    For Universities: Scrutinise all the research grants held by individuals to ensure that the amount of time specified for research activities is realistic, taking into account other job demands. A person would be debarred from putting in a grant proposal if they were over a limit in terms of hours per week already allocated to research, along the lines of the sketch below.
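A minimal sketch of how suggestion 3 might be operationalised follows; the grant names, the hours, and the 20-hour weekly cap are all invented for illustration, not proposals for specific numbers.

```python
# Illustrative check for suggestion 3: total committed research hours per week
# across all grants held, flagged against an assumed cap (all numbers invented).
WEEKLY_CAP_HOURS = 20   # assumed ceiling for research time alongside teaching etc.

grants_held = {         # hours per week committed on each active grant
    "ESRC project grant": 8,
    "Charity-funded study": 7,
    "EU collaboration": 9,
}

total_hours = sum(grants_held.values())
print(f"Total committed research time: {total_hours} hr/week")

if total_hours >= WEEKLY_CAP_HOURS:
    print("Over-committed: block new applications until commitments "
          "fall below the cap.")
else:
    print(f"Headroom for new applications: {WEEKLY_CAP_HOURS - total_hours} hr/week")
```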

If all three could be implemented, we could achieve a situation where the pressure on research councils is relieved, academics would be able to do good research with less stress on continually applying for funds, and research quality would be enhanced.

P.S. 11th January 2011
My congratulating ESRC for undertaking consultation with academics may have been premature. It seems very few academics knew about this, at least in my discipline of psychology. One who made enquiries at his institution discovered that the consultation document had gone to members of the Association of Research Managers and Administrators (whose website is the only place with a link to the document). At Oxford University, the recipient of the document circulated it to academics for their views, but I suspect in many places this did not happen. So, if rumour is to be believed, those of us who actually write the grants don't get asked about a major change in funding policy - just those who administer grant applications. It is also noteworthy that the document was released on 20th December, just before Universities close for a long break.