Showing posts with label neuroscience. Show all posts

Tuesday, 8 July 2014

Bishopblog catalogue (updated 8th July 2014)

Source: http://www.weblogcartoons.com/2008/11/23/ideas/

Those of you who follow this blog may have noticed a lack of thematic coherence. I write about whatever is exercising my mind at the time, which can range from technical aspects of statistics to the design of bathroom taps. I decided it might be helpful to introduce a bit of order into this chaotic melange, so here is a catalogue of posts by topic.

Language impairment, dyslexia and related disorders
The common childhood disorders that have been left out in the cold (1 Dec 2010) What's in a name? (18 Dec 2010) Neuroprognosis in dyslexia (22 Dec 2010) Where commercial and clinical interests collide: Auditory processing disorder (6 Mar 2011) Auditory processing disorder (30 Mar 2011) Special educational needs: will they be met by the Green paper proposals? (9 Apr 2011) Is poor parenting really to blame for children's school problems? (3 Jun 2011) Early intervention: what's not to like? (1 Sep 2011) Lies, damned lies and spin (15 Oct 2011) A message to the world (31 Oct 2011) Vitamins, genes and language (13 Nov 2011) Neuroscientific interventions for dyslexia: red flags (24 Feb 2012) Phonics screening: sense and sensibility (3 Apr 2012) What Chomsky doesn't get about child language (3 Sept 2012) Data from the phonics screen (1 Oct 2012) Auditory processing disorder: schisms and skirmishes (27 Oct 2012) High-impact journals (Action video games and dyslexia: critique) (10 Mar 2013) Overhyped genetic findings: the case of dyslexia (16 Jun 2013) The arcuate fasciculus and word learning (11 Aug 2013) Changing children's brains (17 Aug 2013) Raising awareness of language learning impairments (26 Sep 2013) Good and bad news on the phonics screen (5 Oct 2013) What is educational neuroscience? (25 Jan 2014) Parent talk and child language (17 Feb 2014) My thoughts on the dyslexia debate (20 Mar 2014)

Autism
Autism diagnosis in cultural context (16 May 2011) Are our ‘gold standard’ autism diagnostic instruments fit for purpose? (30 May 2011) How common is autism? (7 Jun 2011) Autism and hypersystematising parents (21 Jun 2011) An open letter to Baroness Susan Greenfield (4 Aug 2011) Susan Greenfield and autistic spectrum disorder: was she misrepresented? (12 Aug 2011) Psychoanalytic treatment for autism: Interviews with French analysts (23 Jan 2012) The ‘autism epidemic’ and diagnostic substitution (4 Jun 2012) How wishful thinking is damaging Peta's cause (9 June 2014)

Developmental disorders/paediatrics
The hidden cost of neglected tropical diseases (25 Nov 2010) The National Children's Study: a view from across the pond (25 Jun 2011) The kids are all right in daycare (14 Sep 2011) Moderate drinking in pregnancy: toxic or benign? (21 Nov 2012) Changing the landscape of psychiatric research (11 May 2014)

Genetics
Where does the myth of a gene for things like intelligence come from? (9 Sep 2010) Genes for optimism, dyslexia and obesity and other mythical beasts (10 Sep 2010) The X and Y of sex differences (11 May 2011) Review of How Genes Influence Behaviour (5 Jun 2011) Getting genetic effect sizes in perspective (20 Apr 2012) Moderate drinking in pregnancy: toxic or benign? (21 Nov 2012) Genes, brains and lateralisation (22 Dec 2012) Genetic variation and neuroimaging (11 Jan 2013) Have we become slower and dumber? (15 May 2013) Overhyped genetic findings: the case of dyslexia (16 Jun 2013)

Neuroscience
Neuroprognosis in dyslexia (22 Dec 2010) Brain scans show that… (11 Jun 2011) Time for neuroimaging (and PNAS) to clean up its act (5 Mar 2012) Neuronal migration in language learning impairments (2 May 2012) Sharing of MRI datasets (6 May 2012) Genetic variation and neuroimaging (1 Jan 2013) The arcuate fasciculus and word learning (11 Aug 2013) Changing children's brains (17 Aug 2013) What is educational neuroscience? (25 Jan 2014) Changing the landscape of psychiatric research (11 May 2014)

Statistics
Book review: biography of Richard Doll (5 Jun 2010) Book review: the Invisible Gorilla (30 Jun 2010) The difference between p < .05 and a screening test (23 Jul 2010) Three ways to improve cognitive test scores without intervention (14 Aug 2010) A short nerdy post about the use of percentiles (13 Apr 2011) The joys of inventing data (5 Oct 2011) Getting genetic effect sizes in perspective (20 Apr 2012) Causal models of developmental disorders: the perils of correlational data (24 Jun 2012) Data from the phonics screen (1 Oct 2012) Moderate drinking in pregnancy: toxic or benign? (1 Nov 2012) Flaky chocolate and the New England Journal of Medicine (13 Nov 2012) Interpreting unexpected significant results (7 June 2013) Data analysis: Ten tips I wish I'd known earlier (18 Apr 2014) Data sharing: exciting but scary (26 May 2014)

Journalism/science communication
Orwellian prize for scientific misrepresentation (1 Jun 2010) Journalists and the 'scientific breakthrough' (13 Jun 2010) Science journal editors: a taxonomy (28 Sep 2010) Orwellian prize for journalistic misrepresentation: an update (29 Jan 2011) Academic publishing: why isn't psychology like physics? (26 Feb 2011) Scientific communication: the Comment option (25 May 2011) Accentuate the negative (26 Oct 2011) Publishers, psychological tests and greed (30 Dec 2011) Time for academics to withdraw free labour (7 Jan 2012) Novelty, interest and replicability (19 Jan 2012) 2011 Orwellian Prize for Journalistic Misrepresentation (29 Jan 2012) Time for neuroimaging (and PNAS) to clean up its act (5 Mar 2012) Communicating science in the age of the internet (13 Jul 2012) How to bury your academic writing (26 Aug 2012) High-impact journals: where newsworthiness trumps methodology (10 Mar 2013) Blogging as post-publication peer review (21 Mar 2013) A short rant about numbered journal references (5 Apr 2013) Schizophrenia and child abuse in the media (26 May 2013) Why we need pre-registration (6 Jul 2013) On the need for responsible reporting of research (10 Oct 2013) A New Year's letter to academic publishers (4 Jan 2014)

Social Media
A gentle introduction to Twitter for the apprehensive academic (14 Jun 2011) Your Twitter Profile: The Importance of Not Being Earnest (19 Nov 2011) Will I still be tweeting in 2013? (2 Jan 2012) Blogging in the service of science (10 Mar 2012) Blogging as post-publication peer review (21 Mar 2013) The impact of blogging on reputation (27 Dec 2013) WeSpeechies: A meeting point on Twitter (12 Apr 2014)

Academic life
An exciting day in the life of a scientist (24 Jun 2010) How our current reward structures have distorted and damaged science (6 Aug 2010) The challenge for science: speech by Colin Blakemore (14 Oct 2010) When ethics regulations have unethical consequences (14 Dec 2010) A day working from home (23 Dec 2010) Should we ration research grant applications? (8 Jan 2011) The one hour lecture (11 Mar 2011) The expansion of research regulators (20 Mar 2011) Should we ever fight lies with lies? (19 Jun 2011) How to survive in psychological research (13 Jul 2011) So you want to be a research assistant? (25 Aug 2011) NHS research ethics procedures: a modern-day Circumlocution Office (18 Dec 2011) The REF: a monster that sucks time and money from academic institutions (20 Mar 2012) The ultimate email auto-response (12 Apr 2012) Well, this should be easy…. (21 May 2012) Journal impact factors and REF2014 (19 Jan 2013) An alternative to REF2014 (26 Jan 2013) Postgraduate education: time for a rethink (9 Feb 2013) High-impact journals: where newsworthiness trumps methodology (10 Mar 2013) Ten things that can sink a grant proposal (19 Mar 2013) Blogging as post-publication peer review (21 Mar 2013) The academic backlog (9 May 2013) Research fraud: More scrutiny by administrators is not the answer (17 Jun 2013) Discussion meeting vs conference: in praise of slower science (21 Jun 2013) Why we need pre-registration (6 Jul 2013) Evaluate, evaluate, evaluate (12 Sep 2013) High time to revise the PhD thesis format (9 Oct 2013) The Matthew effect and REF2014 (15 Oct 2013) Pressures against cumulative research (9 Jan 2014) Why does so much research go unpublished? (12 Jan 2014) The University as big business: the case of King's College London (18 June 2014)

Celebrity scientists/quackery
Three ways to improve cognitive test scores without intervention (14 Aug 2010) What does it take to become a Fellow of the RSM? (24 Jul 2011) An open letter to Baroness Susan Greenfield (4 Aug 2011) Susan Greenfield and autistic spectrum disorder: was she misrepresented? (12 Aug 2011) How to become a celebrity scientific expert (12 Sep 2011) The kids are all right in daycare (14 Sep 2011)  The weird world of US ethics regulation (25 Nov 2011) Pioneering treatment or quackery? How to decide (4 Dec 2011) Psychoanalytic treatment for autism: Interviews with French analysts (23 Jan 2012) Neuroscientific interventions for dyslexia: red flags (24 Feb 2012)

Women
Academic mobbing in cyberspace (30 May 2010) What works for women: some useful links (12 Jan 2011) The burqua ban: what's a liberal response (21 Apr 2011) C'mon sisters! Speak out! (28 Mar 2012) Psychology: where are all the men? (5 Nov 2012) Men! what you can do to improve the lot of women (25 Feb 2014) Should Rennard be reinstated? (1 June 2014)

Politics and Religion
Lies, damned lies and spin (15 Oct 2011) A letter to Nick Clegg from an ex liberal democrat (11 Mar 2012) BBC's 'extensive coverage' of the NHS bill (9 Apr 2012) Schoolgirls' health put at risk by Catholic view on vaccination (30 Jun 2012) A letter to Boris Johnson (30 Nov 2013) How the government spins a crisis (floods) (1 Jan 2014)

Humour and miscellaneous
Orwellian prize for scientific misrepresentation (1 Jun 2010) An exciting day in the life of a scientist (24 Jun 2010) Science journal editors: a taxonomy (28 Sep 2010) Parasites, pangolins and peer review (26 Nov 2010) A day working from home (23 Dec 2010) The one hour lecture (11 Mar 2011) The expansion of research regulators (20 Mar 2011) Scientific communication: the Comment option (25 May 2011) How to survive in psychological research (13 Jul 2011) Your Twitter Profile: The Importance of Not Being Earnest (19 Nov 2011) 2011 Orwellian Prize for Journalistic Misrepresentation (29 Jan 2012) The ultimate email auto-response (12 Apr 2012) Well, this should be easy…. (21 May 2012) The bewildering bathroom challenge (19 Jul 2012) Are Starbucks hiding their profits on the planet Vulcan? (15 Nov 2012) Forget the Tower of Hanoi (11 Apr 2013) How do you communicate with a communications company? (30 Mar 2014)

Sunday, 11 May 2014

Changing the landscape of psychiatric research:

What will the RDoC initiative by NIMH achieve?


©CartoonStock.com

There's a lot wrong with current psychiatric classification. Every few years, the American Psychiatric Association comes up with a new set of labels and diagnostic criteria, but whereas the Diagnostic and Statistical Manual used to be seen as some kind of Bible for psychiatrists, the latest version, DSM-5, has been greeted with hostility and derision. The number of diagnostic categories keeps multiplying without any commensurate increase in the evidence base to validate the categories. It has been argued that vested interests from pharmaceutical companies create pressures to medicalise normality so that everyone will sooner or later have a diagnosis (Frances, 2013). And even excluding such conflicts of interest, there are concerns that such well-known categories as schizophrenia and depression lack reliability and validity (Kendell & Jablensky, 2003).

In 2013, Tom Insel, Director of the US funding agency, the National Institute of Mental Health (NIMH), created a stir with a blogpost in which he criticised DSM-5 and laid out the vision of a new Research Domain Criteria (RDoC) project. This aimed "to transform diagnosis by incorporating genetics, imaging, cognitive science, and other levels of information to lay the foundation for a new classification system."

He drew parallels with physical medicine, where diagnosis is not made purely on the basis of symptoms, but also uses measures of underlying physiological function that help distinguish between conditions and indicate the most appropriate treatment. This, he argued, should be the goal of psychiatry, to go beyond presenting symptoms to underlying causes, reconceptualising disorders in terms of neural systems.

This has, of course, been a goal for many researchers for several years, but Insel expressed frustration at the lack of progress, noting that at present: "We cannot design a system based on biomarkers or cognitive performance because we lack the data". That being the case, he argued, a priority for NIMH should be to create a framework for collecting relevant data. This would entail casting aside conventional psychiatric diagnoses, working with dimensions rather than categories, and establishing links between genetic, neural and behavioural levels of description.

This represents a massive shift in research funding strategy, and some are uneasy about it. Nobody, as far as I am aware, is keen to defend the status quo, as represented by DSM.  As Insel remarked in his blogpost: "Patients with mental disorders deserve better". The issue is whether RDoC is going to make things any better. I see five big problems.

1. The RDoC framework rests on the assumption that mental illnesses are 'disorders of brain circuits', an assumption that McLaren (2011) is among those to have queried. The goal of the RDoC program is to fill in a huge matrix with new research findings. The rows of the matrix are not the traditional diagnostic categories: instead they are five research domains: Negative Valence Systems, Positive Valence Systems, Cognitive Systems, Systems for Social Processes, and Arousal/Regulatory Systems, each of which has subdivisions: e.g. Cognitive Systems is broken down into Attention, Perception, Working memory, Declarative memory, Language behavior and Cognitive (effortful) control. The columns of the matrix are Genes, Molecules, Cells, Circuits, Physiology, Behavior, Self-Reports, and Paradigms. Strikingly absent is anything about experience or environment.

This seems symptomatic of our age. I remember sitting through a conference presentation about a study investigating whether brain measures could predict response to cognitive behaviour therapy in depression.  OK, it's possible that they might, but what surprised me was that no measures of past life events or current social circumstances were included in the study. My intuitions may be wrong, but it would seem that these factors are likely to play a role. My impression is that some of the more successful interventions developed in recent years are based not on neurobiology or genetics, but on a detailed analysis of the phenomenology of mental illness, as illustrated, for example, by the work of my colleagues David Clark and Anke Ehlers. Consideration of such factors is strikingly absent from RDoC.

2. The goal of the RDoC is ultimately to help patients, but the link with intervention is unclear. Suppose I become increasingly obsessed with checking electrical switches, such that I am unable to function in my job. Thanks to the RDoC program, I'm found to have a dysfunctional neural circuit. Presumably the benefit of this is that I could be given a new pharmacological intervention targeting that circuit, which will make me less obsessive. But how long will I stay on the drug? It's not given me any way to cope with the urge to check, or with the unwanted thoughts that obtrude into my consciousness, and these are likely to recur when I come off it. I'm not opposed to pharmacological interventions in principle, but they tend not to have a 'stop rule'.

There are psychological interventions that tackle the symptoms and the cognitive processes that underlie them more directly. Could better knowledge of neurobiological correlates help develop more of these? I guess it is possible, but my overall sense is that this translational potential is exaggerated – just as with the current hype around 'educational neuroscience'. The RDoC program embodies a mistaken belief that neuroscientific research is inherently better than psychological research because it deals with primary causes, when in fact it cannot capture key clinical phenomena. For instance, the distinction between a compulsive hand-washer and a compulsive checker is unlikely to have a clear brain correlate, yet we need to know about an individual's specific symptoms in order to help overcome them.

3. Those proposing RDoC appear to have a naive view of the potential of genetics to inform psychiatry.  It's worth quoting in detail from their vision of the kinds of study that would be encouraged by NIMH, as stated here:

Recent studies have shown that a number of genes reported to confer risk for schizophrenia, such as DISC1 (“Disrupted in schizophrenia”) and neuregulin, actually appear to be similar in risk for unipolar and bipolar mood disorders. ... Thus, in one potential design, inclusion criteria might simply consist of all patients seen for evaluation at a psychotic disorders treatment unit. The independent variable might comprise two groups of patients: One group would be positive and the other negative for one or more risk gene configurations (SNP or CNV), with the groups matched on demographics such as age, sex, and education. Dependent variables could be responses to a set of cognitive paradigms, and clinical status on a variety of symptom measures. Analyses would be conducted to compare the pattern of differences in responses to the cognitive or emotional tasks in patients who are positive and negative for the risk configurations.

This sounds to me like a recipe for wasting a huge amount of research funding. The effect sizes of most behavioural/cognitive genetic associations are tiny, so one would need an enormous sample size to see differences related to genotype. Coupled with an open-ended search for differences between genotypes on a battery of cognitive measures, this would undoubtedly generate some 'significant' results which could go on to mislead the field for some time before a failure to replicate emerged (cf. Munafò & Gage, 2013).
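
The arithmetic behind both concerns can be sketched with a few lines of code. The numbers here are illustrative assumptions, not figures from the RDoC proposal: a 'tiny' effect of standardised size d = 0.1, and an uncorrected battery of 32 comparisons.

```python
import math

def n_per_group(d):
    """Approximate sample size per group for a two-sample comparison,
    two-tailed alpha = .05 and 80% power (normal approximation)."""
    z = 1.959964 + 0.841621  # z for alpha/2 = .025 plus z for power = .80
    return math.ceil(2 * (z / d) ** 2)

def p_any_false_positive(n_tests, alpha=0.05):
    """Chance of at least one 'significant' result across independent
    null tests when no correction for multiple testing is applied."""
    return 1 - (1 - alpha) ** n_tests

# A genotype effect of d = 0.1 needs around 1,570 patients per group:
print(n_per_group(0.1))
# while an uncorrected trawl through 32 null measures will turn up at
# least one 'significant' result about 80% of the time:
print(round(p_any_false_positive(32), 2))
```

In other words, a study of realistic size is almost guaranteed to miss true effects of this magnitude while still producing spurious ones.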

The NIMH website notes that "the current diagnostic system is not informed by recent breakthroughs in genetics". There is good reason for that: to date, the genetic findings have been disappointing. The associations that are found either indicate extremely rare and heterogeneous mutations of large effect, or involve common genetic variants whose small effects are not of clinical significance. We cannot know what the future holds, but to date talk of 'breakthroughs' is misleading.

4. Some of the entries in the RDoC matrix also suggest a lack of appreciation of the difference between studying individual differences and studying group effects. The RDoC program is focused on understanding individual differences. That requires particularly stringent criteria for measures, which need to be adequately reliable, valid and sensitive to pick up differences between people. I appreciate that the published RDoC matrices are seen as a starting-point and not as definitive, but I would recommend that more thought goes into establishing the psychometric credibility of measures before embarking on expensive studies looking for correlations between genes, brains and behaviour. If the rank ordering of a group of people on a measure is not the same from one occasion to another, or if there are substantial floor or ceiling effects, that measure is not going to be much use as an indicator of an underlying construct. Furthermore, if different versions of a task that are supposed to tap into a single construct give different patterns of results, then we need a rethink – see e.g. Foti et al., 2013, and Shilling et al., 2002, for examples. Such considerations are often ignored by those attempting to move experimental work into a translational phase. If we are really to achieve 'precision medicine' we need precise measures.
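
The cost of unreliable measures can be made concrete with the classic attenuation formula: the correlation you can observe between two measures is the true correlation between the underlying constructs, shrunk by the square root of each measure's reliability. The numbers below are purely illustrative.

```python
import math

def observed_correlation(r_true, rel_x, rel_y):
    """Spearman's attenuation formula: observed r = true r * sqrt(rxx * ryy),
    where rxx and ryy are the reliabilities of the two measures."""
    return r_true * math.sqrt(rel_x * rel_y)

# Even a genuine brain-behaviour correlation of .5 shrinks to .3
# if both measures have only modest test-retest reliability (.6):
print(round(observed_correlation(0.5, 0.6, 0.6), 2))
```

So before hunting for gene-brain-behaviour correlations, it pays to know the reliabilities involved: they put a hard ceiling on what any study can find.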

5. The matrix as it stands does not give much confidence that the RDoC approach will give clearer gene-brain-behaviour links than traditional psychiatric categories.

For instance, BDNF appears in the Gene column of the matrix for the constructs of acute threat, auditory perception, declarative memory, goal selection, and response selection. COMT appears with threat, loss, frustrative nonreward, reward learning, goal selection, response selection and reception of facial communication. Of course, it's early days. The whole purpose of the enterprise is to flesh out the matrix with more detailed and accurate information. Nevertheless, the attempts at summarising what is known to date do not inspire confidence that this goal will be achieved.

After such a list of objections to RDoC, I do have one good thing to say about it, which is that it appears to be encouraging and embracing data-sharing and open science. This will be an important advance that may help us find out more quickly which avenues are worth exploring and which are cul-de-sacs. I suspect we will find out some useful things from the RDoC project: I just have reservations as to whether they will be of any benefit to psychiatry, or more importantly, to psychiatric patients.

References
Foti, D., Kotov, R., & Hajcak, G. (2013). Psychometric considerations in using error-related brain activity as a biomarker in psychotic disorders. Journal of Abnormal Psychology, 122(2), 520-531. doi: 10.1037/a0032618

Frances, A. (2013). Saving normal: An insider's revolt against out-of-control psychiatric diagnosis, DSM-5, big pharma, and the medicalization of ordinary life. New York: HarperCollins.

Kendell, R., & Jablensky, A. (2003). Distinguishing between the validity and utility of psychiatric diagnoses. American Journal of Psychiatry, 160, 4-12.

McLaren, N. (2011). Cells, circuits, and syndromes: A critical commentary on the NIMH Research Domain Criteria project. Ethical Human Psychology and Psychiatry, 13(3), 229-236. doi: 10.1891/1559-4343.13.3.229

Munafò, M. R., & Gage, S. H. (2013). Improving the reliability and reporting of genetic association studies. Drug and Alcohol Dependence. doi: 10.1016/j.drugalcdep.2013.03.023

Shilling, V. M., Chetwynd, A., & Rabbitt, P. M. A. (2002). Individual inconsistency across measures of inhibition: an investigation of the construct validity of inhibition in older adults. Neuropsychologia, 40, 605-619.


This article (Figshare version) can be cited as:
 Bishop, Dorothy V M (2014): Changing the landscape of psychiatric research: What will the RDoC initiative by NIMH achieve?. figshare. http://dx.doi.org/10.6084/m9.figshare.1030210 

Saturday, 25 January 2014

What is educational neuroscience?

©CartoonStock.com

As someone who works at the interface of child development and neuroscience, I've been struck by the relentless rise of the sub-discipline of 'educational neuroscience'. New imaging technologies have led to a burgeoning of knowledge about the developing brain, and it is natural to want to apply this knowledge to improving children's learning. Centres for educational neuroscience have sprung up all over the place, with support from universities who see them as ticking two important boxes: interdisciplinarity and impact.

But at the heart of this enterprise, there seems to be a massive disconnect. Neuroscientists can tell you which brain regions are most involved in particular cognitive activities and how this changes with age or training. But these indicators of learning do not tell you how to achieve learning. Suppose I find out that the left angular gyrus becomes more active as children learn to read. What is a teacher supposed to do with that information?

As John Bruer pointed out back in 1997, the people who can be useful to teachers are psychologists. Psychological experiments can establish the cognitive underpinnings of skills such as reading, and can evaluate which are the most effective ways of teaching, and whether these differ from child to child. They can address questions such as whether there are optimal ages at which to teach different skills, how motivation and learning interact, and whether it is better to learn material in large chunks all at once or spaced out over intervals. At a trivial level, these could all be designated as aspects of 'educational neuroscience', insofar as the brain is necessarily involved in cognition and motivation. But they can all be studied without taking any measurements of brain function.

It is possible, of course, to look at the brain correlates of all of these things, but that's unlikely to influence what's done in the classroom. Suppose I want to see whether training in phonological awareness improves children's reading outcomes. I measure brain activation before and after training, and compare results with those of a control group who don't get the training. There are various possible patterns of results, as laid out in the table below:

Outcome A: reading improves; brain activation changes
Outcome B: reading improves; no change in brain activation
Outcome C: reading does not improve; brain activation changes
Outcome D: reading does not improve; no change in brain activation

As pointed out by Coltheart and McArthur (2012), what matters to the teacher is whether the training is effective in improving reading. It's really not going to make any difference whether detectable brain changes have happened, so either outcome A or B would give good justification for adopting the training, whereas outcomes C and D would not.

Well, you might say, children differ, and the brain measures might show up differences between those who do and don't respond to training. Indeed, but how would that be useful educationally? I've seen several studies that propose brain scans might be useful in identifying which children will and won't benefit from an intervention. That's a logical possibility, but given that brain scanning costs several hundred pounds per person, it's not realistic to suggest this has any utility in the real world, especially when there are likely to be behavioural indicators that predict outcomes just as well.

So are there actual or potential examples of how knowledge of neuroscience - as opposed to psychology - might influence educational practice? I mentioned three examples in this review: neurofeedback, neuropharmacology and brain stimulation are all methods that focus directly on changing the brain in ways that might potentially affect learning, and so could validly be designated as educational neuroscience. They are, however, as yet exploratory and experimental. The last of these, brain stimulation, was described this week in a blogpost by Roi Cohen Kadosh, who notes promising early results, but emphasizes that we need more experimental work establishing both risks and benefits before we could consider direct application of this method to improving children's learning.

I'm all in favour of cognitive neuroscience and basic research that discovers more about the neural underpinnings of typical and atypical development. By all means, let's do such studies, but let's do them because we want to find out more about the brain, and not pretend it has educational relevance.

If our goal is to develop better educational interventions, then we should be directing research funds into well-designed trials of cognitive and behavioural studies of learning, rather than fixating on neuroscience. Let me leave the last word to Hirsh-Pasek and Bruer, who described a Chilean conference in 2007 on Early Education and Human Brain Development. They noted: "The Chilean educators were looking to brain science for insights about which type of preschool would be the most effective, whether children are safe in child care, and how best to teach reading. The brain research presented at the conference that day was mute on these issues. However, cognitive and behavioral science could help."

References
Bishop, D. V. M. (2013). Neuroscientific studies of intervention for language impairment in children: interpretive and methodological problems. Journal of Child Psychology and Psychiatry, 54(3), 247-259. doi: 10.1111/jcpp.12034

Bruer, J. T. (1997). Education and the brain: A bridge too far. Educational researcher, 26(8), 4-16. doi: 10.3102/0013189X026008004

Coltheart, M., & McArthur, G. (2012). Neuroscience, education and educational efficacy research. In M. Anderson & S. Della Sala (Eds.), Neuroscience in Education (pp. 215-221). Oxford: Oxford University Press.

This article (Figshare version) can be cited as: 
Bishop, Dorothy V M (2014): What is educational neuroscience?. figshare.
http://dx.doi.org/10.6084/m9.figshare.1030405

Thursday, 10 October 2013

On the need for responsible reporting of research to the media

This was one of the first tweets I saw when I woke up this morning:


In response, a parent of two girls with autism tweeted "gutted to read this. B's statement has been final for 1 yr but no therapy has been done. we're still waiting."

I was really angry. A parent who is waiting for therapy for a child has many reasons to be upset. But the study described on the BBC Website did NOT identify a 'critical window'. It was not about autism and not about intervention.

I was aware of the study because I'd been asked by the Science Media Centre to comment on an embargoed version a couple of days ago.

These requests for commentary on embargoed papers always occur very late in the day, which makes it difficult to give a thorough appraisal. But I felt I'd got the gist: the researchers had recruited 108 children aged between 1 and 6 years and done scans to look at the development of white matter in the brain. They also gave children a well-known test of cognitive development, the Mullen scales, which assesses language, visual and fine motor skills. It's not clear where the children came from, but their scores on the Mullen scales were pretty average, and as far as I can tell, none of them had any developmental disorders.

The researchers were particularly interested in lateralisation: the tendency to have more white matter on one side of the brain than the other. Left-sided lateralisation of white matter in some brain regions is well-established in adults but there's been debate as to whether this is something that develops early in life, or whether it is present from birth. In the introduction, the authors state that this lateralisation is strongly heritable, but although that's often claimed, the evidence doesn't support it (Bishop, 2013). A preponderance of white matter in the left hemisphere is of interest because in most people, the left side of the brain is strongly involved in language processing.

The authors estimated lateralisation in numerous regions of the left and right brain using a measure termed the myelin water fraction. Myelin is a fatty sheath that develops around the axons of cells in the brain, leading to improved efficiency of neural transmission. Myelination is a well-established phenomenon in brain development.

The main findings I took away from the paper were (a) myelin is asymmetrically distributed in the brains of young children, with many regions showing greater myelin density in the left than the right; (b) although the amount of myelin increases with age, the extent of lateralisation is stable from 1 to 6 years. This is an important finding.

The authors, however, put most focus on another aspect of the study: the relationship between myelin lateralisation and language level. Overall, there was no relationship with asymmetry of a temporal-occipital region that overlapped with the arcuate fasciculus, a fibre tract important for language that previously had given rather inconsistent results (see Bishop, 2013). However, looking at a total of eight brain regions and four cognitive measures, they found two regions where leftward asymmetry was related to language or visual measures, and one where rightward asymmetry was related to expressive and receptive language.

Their primary emphasis, however, was on another finding, that there were interactions between age and lateralisation, so that, for instance, left-sided lateralisation of myelin in a region encompassing caudate/thalamus and frontal cortex only became correlated with language level in older children. I found it hard to know how much confidence to place in this result: the authors stated that they corrected for multiple comparisons using false discovery rate, but if, as seems the case, they looked at both main effects and interaction terms in 32 statistical analyses, then some of these findings could be chance.
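To make the multiple-comparisons worry concrete, here is a minimal sketch (illustrative only, not the authors' actual analysis, and the p-values are made up): with 32 tests at an uncorrected threshold of p < .05, you would expect about 1.6 "significant" results by chance alone under the null. Benjamini-Hochberg false discovery rate correction controls the expected proportion of false discoveries among the tests it is applied to, but only if all the tests conducted - main effects and interactions alike - actually enter the correction.

```python
import random

def benjamini_hochberg(p_values, alpha=0.05):
    """Return indices of tests declared significant under BH FDR.

    Finds the largest rank k such that p_(k) <= (k/m) * alpha;
    that test and all tests with smaller p-values are declared
    significant.
    """
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    k_max = 0
    for rank, idx in enumerate(order, start=1):
        if p_values[idx] <= rank / m * alpha:
            k_max = rank
    return sorted(order[:k_max])

# Hypothetical p-values for 32 region-by-measure tests under the null
random.seed(1)
pvals = [random.random() for _ in range(32)]

# Expected false positives with NO correction at p < .05
expected_false_positives = 32 * 0.05  # 1.6

significant = benjamini_hochberg(pvals)
```

The point of the sketch is simply that if interaction terms were examined on top of 32 main-effect analyses without all of them entering the FDR correction, some of the reported effects could plausibly be chance findings.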

Be that as it may, it is an odd result. Remember that this was a cross-sectional study and that on no index was there an age effect on lateralisation. So it does not show that changes in language ability - which are substantial over this age range - are driven by changes in lateralisation of myelin. So what do the authors say? Well, in the paper, they conclude "The data presented here are cross sectional, longitudinal analysis will allow us to confirm these findings; however, the changing interaction between ability and myelin may be mediated by progressive functional specialization in these connected cortical regions, which itself is partly mediated by environmental influences" (p. 16175). But this is pure speculation: they have not measured functional specialisation, and, as they appear to recognise, without longitudinal data, it is premature to interpret their results as indicating change with age.

If you've followed me so far, you may be wondering when I'm going to get on to the bit about intervention for autism and critical periods. Well, there's no data in this paper on that topic. So why did the BBC publish an account of the paper likely to cause dismay and alarm in parents of children with language and communication problems? The answer is because King's College London put out a press release about this study that contained at least as much speculation as fact. We are told that the study "reveals a particular window, from 2 years to the age of 4, during which environmental influence on language development may be greatest." It doesn't do anything of the kind. They say: "the findings help explain why, in a bilingual environment, very young typically developing children are better capable of becoming fluent in both languages; and why interventions for neurodevelopmental disorders where language is impaired, such as autism, may be much more successful if implemented at a very young age." Poppycock.

A few months ago the same press office put out a similarly misleading press release about another study, quoting the principal researcher as stating: “Now we understand that this is how we learn new words, our concern is that children will have less vocabulary as much of their interaction is via screen, text and email rather than using their external prosthetic memory. This research reinforces the need for us to maintain the oral tradition of talking to our children.” As I noted elsewhere, the study was not about children, computers or word learning.

I can see that there is a problem for researchers doing studies of structural brain development. It can be hard to excite the general public about the results unless you talk about potential implications. It is frankly irresponsible, though, to go so far beyond your data that the headline is based on the speculation rather than the findings.

I am tired of researchers trying to make their studies relevant by dragging in potential applications to autism, schizophrenia, or dyslexia, when they haven't done any research on clinical groups. They need to remember that there are real people out there whose everyday life is affected by these conditions, and that neither they nor the media can easily discriminate what a study actually found from speculations about its implications. It is the duty of researchers and press officers to be crystal clear about that distinction to avoid causing confusion and distress.

POSTSCRIPT
11/10/13: Dr O'Muircheartaigh has commented below to absolve the KCL Press Office of any responsibility for the content of their press release. I apologise for assuming that they were involved in decisions about how to publicise this research and have reworded parts of this blogpost to remove that implication.


References 

Bishop, D. V. M. (2013). Cerebral asymmetry and language development: Cause, correlate, or consequence? Science, 340(6138), 1230531. doi: 10.1126/science.1230531

O'Muircheartaigh, J., Dean, D. C., Dirks, H., Waskiewicz, N., Lehman, K., Jerskey, B. A., & Deoni, S. C. L. (2013). Interactions between white matter asymmetry and language during neurodevelopment. Journal of Neuroscience, 33(41), 16170-16177. doi: 10.1523/jneurosci.1463-13.2013