Thursday, 23 December 2010

A day working from home

8.00 a.m. Resolve that today is the day I will seriously engage with referee comments on a paper. Best to go in to office at work, where there are no distractions right now. Get up and don tights, vests, jumper, warm trousers, boots, cardigan, gloves and quilted Canadian coat with hood. Look more like Michelin Man than usual.
8.30 a.m. Set off to walk to work. Wenceslas-like situation in front garden, and difficulty opening gate against foot-high accumulation of snow.
8.35 a.m. See neighbour slide on ice and tumble on back. Fortunately no bones broken. Persevere, optimistic that the main thoroughfares will be cleared of snow.
8.40 a.m. They aren’t. Snow appears to have melted briefly and then refrozen. See another person slide on ice and land on back.
8.45 a.m. Return home.
9.00 a.m. Turn on gas fire and clear desk in preparation for serious academic activity. Get sidetracked by gas bill and unfiled bank statements. Once all the surface debris cleared, decide desk has too many crumbs on it to be compatible with serious work. Engage in desk-cleaning.
9.15 a.m. Open email. Granting agency has sent me material for a proposal I’d agreed to review, consisting of six PDFs. Departmental administrator has sent stern message that we all have to come into work or have our pay docked, unless we have explicit agreement to work from home.
9.20 a.m. Email the few postdocs who haven’t already gone on holiday to tell them to ignore message from administrator.
9.30 a.m. Have a quick look at Twitter. Everyone tweeting about snow or science funding.
9.35 a.m. Bite the bullet and open the referee comments. Save as a new file called ‘response to referees’.
9.40 a.m. Make a cup of coffee to steel myself.
9.45 a.m. Read the referee comments. Aargh, aargh, aargh.
9.50 a.m. Remember my Australian collaborator has already sent some new analyses and suggestions for responding to referees. Download these.
9.55 a.m. Re-read referee comments. Aargh, aargh, aargh, aargh.
10.00 a.m. Print out referee comments. Printer not working. Demanding toner. It has an insatiable appetite for toner. Take out toner and shake it about and put it back in. Printer sneers: You’ve tried that before and I’m not playing. Go into cellar (brrr) where I have providently kept spare toner.
10.05 a.m. Unable to get into toner box. Into kitchen for a knife. Side-tracked by sight of coffee jar. Make another cup of coffee. Open toner box. Find piece of paper with instructions for toner replacement in 12 languages. Go through ritual of shaking toner, removing yellow bit, sliding blue bit up and down, replacing toner in printer.
10.10 a.m. Discover I can save children’s lives by sending old printer cartridge back to special address. Want to save children’s lives, so repackage old toner cartridge in box. Go on hunt for sellotape. Find sellotape. Seal up box. Discover label is inside box. Refind knife. Open box. Extract label. Reseal box. Stick label on box.
10.15 a.m. Resend printer command. Nothing happens. Take toner cartridge out again, shake it about, and put back in.
10.20 a.m. Belatedly realise I have sent document to be printed on default printer, which is at work. Cancel printer command and resend to home printer.
10.25 a.m. Get printout of reviewer comments and re-read. Aargh, aargh, aargh, aargh, aargh. Reviewer 1 argues there are six major flaws with the paper and the data need total reanalysis.
10.30 a.m. Distract myself from grief with a quick check of email. Two messages from Microban International warning me about bacteria in my kitchen at Christmas (how do they know about me and my kitchen?), one announcement that I have won 1.5 million Euros in a lottery I didn’t enter (could come in handy), and a request to review a manuscript.
10.35 a.m. Twitter proves more interesting: everyone has either got a broken-down heating system, or is stuck in an airport somewhere. Feel inappropriately smug. Schadenfreude is a real phenomenon.
10.40 a.m. Glance out of window and notice sadly that birds have failed to find piece of stale bread that I had hung up on tree, or nuts stuck in a bush. Deliberate on whether I should do more to encourage birds. Decide this is no good, and must return my attention to the comments.
10.45 a.m. Download relevant-looking manuscript that I hope will provide some salvation. Read it and make notes.
11.15 a.m. Well, that was quite interesting but of no relevance at all to the current paper.
11.20 a.m. OK, ready to start thinking about reply to reviewer 1. He’s one of those people who accuses you of saying something you haven’t and then reprimands you for it. Grrr. Quick look at BBC News, to distract myself, and counteract irritation with more airport grief.
11.25 a.m. Print out the original manuscript so I can see exactly what we did say. Do first pass at responses to reviewers.
11.50 a.m. Postman rings with parcel that won’t fit in letterbox. Need to negotiate snowy path to front gate. Find wellies. Struggle into wellies. Collect parcel from remarkably cheery postman. Return and spend 5 mins getting out of wellies.
12.00 p.m. Heavy snow has started falling! Husband has started bustling in kitchen to make seafood risotto for lunch. Yum!
12.05 p.m. Husband suggests that if I want risotto to be amazing rather than just fabulous, I should go and buy white wine. And that some red wine for mulling this evening would also be a good idea. Since he is (a) in tracksuit & slippers and (b) busy with his culinary art, and I am fully dressed, there is justice in this. Muse on advantages of living with licensed 9-to-9 store a stone’s throw away.
12.10 p.m. Don duvet-like coat and boots again and venture into snow, returning with 2 bottles of wine and other sundry essentials, on assumption we may be snowed in for days. The street looks magical, like something out of Dickens.
12.15 p.m. Sad blackbird lands on window ledge and looks at me through window. Rummage in fridge for blackbird food and some water.  Only bird-friendly food I can find is olive bread and couscous. Very North Oxford.
12.25 p.m. Check email. Boring, boring, boring, but takes 20 mins!
12.45 p.m. Risotto. Yum yum yum! Radio 4 full of tales of airport woe.
13.15 p.m. Quick twitter-scan and tweet to plug my latest blog.
13.25 p.m. Email check. Administrator now telling people to go home early as the Oxford buses are stopping early and they could get stranded. Have visions of a Heathrow-style psychology dept with people bedding down in the coffee area. Now! back to work.
13.30 p.m. Well, I’ve dealt with two comments from ref 2, both of which involve adding the reference number in two places where I’d left it out. Mild sense of achievement. I like ref 2.
13.40 p.m. One of our analyses involves computing the intraclass correlation between an individual’s waveform and that of the group average. Referee wants us to leave out the individual when computing the grand average. I know that with a sample size of 40+ this makes no difference, but I can’t find where I checked this out, and so am now going to need to redo the analysis to make this point. But now I need to find the Excel formula for the intraclass correlation, which has to be re-entered every time you want to use it in a workbook, and I can’t really remember how to do that either, though I’ve done it loads of times before. I even think I stored instructions for how to do this somewhere, but I can’t find them. So now I am having to run a Search. All to demonstrate to a reviewer something I know is the case and which is not going to make any difference whatsoever to the results.
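(For anyone curious about the point at issue, here is a minimal sketch in Python rather than Excel, with simulated waveforms rather than our data, of why leaving one individual out of a 40-person grand average barely changes the intraclass correlation.)

```python
import numpy as np

def icc_consistency(wave_a, wave_b):
    # ICC(3,1): treat the two waveforms as 'raters' and the timepoints as
    # 'targets', using the standard two-way ANOVA decomposition.
    Y = np.column_stack([wave_a, wave_b])
    n, k = Y.shape
    grand = Y.mean()
    rows, cols = Y.mean(axis=1), Y.mean(axis=0)
    msr = k * np.sum((rows - grand) ** 2) / (n - 1)
    mse = (np.sum((Y - rows[:, None] - cols[None, :] + grand) ** 2)
           / ((n - 1) * (k - 1)))
    return (msr - mse) / (msr + (k - 1) * mse)

rng = np.random.default_rng(1)
n_subj, n_time = 40, 200
signal = np.sin(np.linspace(0, 4 * np.pi, n_time))          # shared ERP-like waveform
waves = signal + rng.normal(0, 0.5, size=(n_subj, n_time))  # one noisy copy per subject

full_avg = waves.mean(axis=0)     # grand average including subject 0
loo_avg = waves[1:].mean(axis=0)  # grand average excluding subject 0
print(icc_consistency(waves[0], full_avg))  # with n = 40 these two values
print(icc_consistency(waves[0], loo_avg))   # come out virtually identical
```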
14.25 p.m. OK, done that. Found formula, worked out how to reinstate it, tried it out on demonstration data and got satisfactory result. Whew.
Looked at email: message to say my email account is being migrated from one system to another. Had been told this would happen but had forgotten. Scary. But good incentive to stop looking at email for a bit.
14.45 p.m. Coffee break. Reading newspapers.
15.00 p.m. Must read various articles sent by collaborator to contest arguments being made by reviewer. OK, brain in gear and articles printed out.
15.30 p.m. Husband decides more ingredients needed for culinary art, and is off to Sainsbury’s. Returns 1 min later saying car is entombed in snow and he needs bucket of hot water.
15.35 p.m. Two articles done, and two to go. Husband returns saying he has removed snowy carapace from car, but now can’t get into it, as doors are frozen shut. He sneers at idea we should check for advice on internet re this situation, and stomps off with more buckets of hot water. Internet advice is to use windscreen wiper fluid. Small problem. Our windscreen wiper fluid is in the car, which we can’t get into.
16.45 p.m. Have read 4 articles, two of which were mostly incomprehensible and of dubious relevance. Still uncertain if I really need to do more analysis. Send email to collaborator in Australia. However, email system is in process of migrating and it is unclear whether it is working or not. Time for a small break and a twitter session.
17.00 p.m. Need to read more so I can write explanatory section requested by reviewers. Three important articles to download and summarise.
18.00 p.m. Read two of them. V. good and helpful. Now on to inspect whether my email migrated OK.
18.30 p.m. Think it did, as I have messages telling me I have been bequeathed large sums of money, as well as two requests to review manuscripts. Time for some mulled wine.
Tomorrow is another day. Resolve, no more tweeting, email, shopping, bird tending, until paper is revised. But maybe just a little blog about my day.....

Wednesday, 22 December 2010

“Neuroprognosis” in dyslexia

Every week seems to bring a new finding about brains of people with various neurodevelopmental disorders. The problem is that the methods get ever more technical, and so the scientific papers get harder and harder to follow, even if, like me, you have some background in neuropsychology. I don’t normally blog about specific papers, but various people have asked what I think of an article that was published this week by Hoeft et al. in the Proceedings of the National Academy of Sciences, and so I thought I’d have a shot at (a) summarising what they found in understandable language, and (b) giving my personal evaluation of the study. The bottom line is that the paper seems methodologically sound, but I’m nevertheless sceptical because (a) I’m always sceptical, and (b) there are some aspects of the results that just seem a bit hard to make sense of.

What question did the researchers ask?

Fig 1: location of IFG
The team, headed by researchers from the Stanford University School of Medicine, took as their starting point two prior observations.
First, previous studies have found that children with dyslexia look different from normal readers when they do reading tasks in a brain scanner. As well as showing under-activation of regions normally involved in reading, dyslexics often show over-activation in the frontal lobes, specifically in the inferior frontal gyri (IFG) (Figure 1).
The researchers were interested in the idea that this IFG activation could be a sign that dyslexics were using different brain regions to compensate for their dyslexia. They reasoned that, if so, then the amount of IFG activity observed on one occasion might predict the amount of reading improvement at a later occasion.
So the specific question was: Does greater involvement of the IFG in reading predict future long-term gains in reading for children with dyslexia?

Who did they study?
The main focus was on 25 teenagers with dyslexia, whose average age was 14 years at the start of the study. The standard deviation was 1.96 years, indicating that most would have been 12 to 16 years old.  There were 12 boys and 13 girls. They were followed up 2.5 years later, at which point they were subdivided, according to how much progress they’d made on one of the reading measures, into a group of 12 with ‘no-gain’ and 13 with ‘reading-gain’.

The criteria for dyslexia were that, at time 1, (a) performance was in the bottom 25% for age on a composite measure based on two subtests of the Test of Word Reading Efficiency (TOWRE), and (b) performance on a nonverbal IQ subtest (WASI Matrices) was within normal limits (within 1 SD of average). The TOWRE is a speeded reading test with two parts: reading of real words, and reading of made-up words – the latter subtest picks up difficulties with converting letters into sounds. The average scores are given in Supporting Materials, and confirm that these children had a mean nonverbal ability score of 103 and a mean TOWRE score of 80. This is well in line with how dyslexia is often defined, and confirms they were children of normal ability with significant reading difficulties.

In addition, a ‘control’ group of normal readers was recruited, but they don’t feature much in the paper. The aim of including them was to see whether the same brain measures that predicted reading improvement in dyslexics would also predict reading improvement in normal readers.  However, the control children were rather above average in reading to start with, and, perhaps not surprisingly, they did not show any improvement over time beyond that which you’d expect for their age.

How was reading measured?

Children were given a fairly large battery of tests of reading, spelling and related skills, in addition to the TOWRE, which had been used to diagnose dyslexia. These included measures of reading accuracy (how many words are read correctly from a word list), reading speed, and reading comprehension (how far the child understands what is read). The key measure used to evaluate reading improvement over time was the Word Identification subtest from the Woodcock Reading Mastery Test (WID).

It is important to realise that all test scores are shown as age-scaled scores. This allows us to ignore the child’s age, as the score just indicates how good or bad the child is relative to others of the same age. For most measures, the scaling is set so that 100 indicates an average score, with a standard deviation (SD) of 15. You can tell how abnormal a score is by seeing how many SDs it is from the mean; around 16% of children get a score of 85 or less (1 SD below average), but only 3% score 70 or less (2 SD below average). At Time 1, the average scores of the dyslexics were mostly in the high 70s to mid 80s, confirming that these children are doing poorly for their age.

When using age-scaled scores, the expectation is that, if a child doesn’t get better or worse relative to other children over time, then the score will stay the same. So a static score does not mean the child has learned nothing: rather, they have just not changed their position relative to other children in terms of reading ability.

Another small point: scaled scores can be transformed so that, for instance, instead of being based on a population average of 100 and SD of 15, the average is specified at 10 and SD as 3. The measures used in this study varied in the scaling they used, but I transformed them so they are all on the same scale: average 100, SD 15. This makes it easier to compare effect sizes across different measures (see below).
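For concreteness, the conversion is just z-score arithmetic; here is a tiny Python sketch (the particular numbers are only illustrative):

```python
def rescale(score, old_mean, old_sd, new_mean=100.0, new_sd=15.0):
    # Express the score in SD units (a z-score), then map it onto the new scale.
    z = (score - old_mean) / old_sd
    return new_mean + z * new_sd

# A scaled score of 7 on a mean-10, SD-3 subtest is 1 SD below average,
# which corresponds to 85 on the mean-100, SD-15 scale:
print(rescale(7, old_mean=10, old_sd=3))  # 85.0
```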

How was brain activity measured?
Functional magnetic resonance imaging (fMRI) was used to measure brain activity while the child did a reading task in a brain scanner at Time 1. The Wikipedia account of fMRI gives a pretty good introduction for the non-specialist, though my readers may be able to recommend better sources.  The reading task involved judging whether two written words rhymed, e.g. bait-gate (YES) or price-miss (NO). Brain activity was also measured during rest periods, when no task was presented, and this was subtracted from the activity during the rhyme task. This is a standard procedure in fMRI that allows one to see what activation is specifically associated with task performance. Activity is measured across the whole brain, which is subdivided into cubes measuring 2 x 2 x 2 mm (voxels). For each voxel, a measure indicates the amount of activation in that region. There are thousands of voxels, and so a huge amount of data is generated for each person.

The researchers also did another kind of brain imaging measurement, diffusion tensor imaging (DTI). This measures connectivity between different brain regions, and reflects aspects of underlying brain structure. The DTI results are of some interest, but not a critical part of the paper and I won’t say more about them here.

No brain imaging was done at Time 2 (or if it was, it was not reported). This was because the goal of the study was to see whether imaging at one point in time could predict outcome later on.


How were the data analysed?

The dyslexics were subdivided into two groups, using a median split based on improvement on the WID test. In other words, those showing the least improvement formed one group (with 12 children) and those with most improvement formed the other (13 children).

The aim, then, was to see how far (a) behavioural measures, such as initial reading test scores, or (b) fMRI results were able to predict which group children came from.

Readers may have come across a method that is often used to do this kind of classification, known as discriminant function analysis. The basic logic is that you take a bunch of measures, and allocate a weighting to each measure according to how well it distinguishes the two groups. So if the measure had the same average score for both groups, the weighting would be zero, but if it was excellent at distinguishing them, the weighting might be 1.0. You then add together all the measures, multiplied by their weightings, with the aim of getting a total score that will do the best possible job at distinguishing groups. You can then use this total score to predict, for each person, which group they belong to. This way you can tell how good the prediction is, e.g. what percentage of people are accurately classified.
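As a toy illustration (simulated data and scikit-learn’s linear discriminant analysis, not the authors’ actual pipeline), the whole procedure takes a few lines of Python:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
# 25 simulated 'children' with 4 measures each; the second group scores
# a little higher on average, mimicking a genuine group difference.
X = np.vstack([rng.normal(0.0, 1.0, size=(12, 4)),
               rng.normal(0.8, 1.0, size=(13, 4))])
y = np.array([0] * 12 + [1] * 13)  # 0 = no-gain, 1 = reading-gain

lda = LinearDiscriminantAnalysis().fit(X, y)
print(lda.coef_)        # the weighting allocated to each measure
print(lda.score(X, y))  # proportion correctly classified (on the very same data!)
```

Note that scoring the classifier on the very data used to fit it gives a flattering figure, which is exactly the over-fitting problem discussed below.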

The extension of this kind of logic to brain imaging is known as multivariate pattern analysis (MVPA). It is nicely explained, with diagrams, on Neuroskeptic’s blog. For a more formal tutorial, see www.princeton.edu/~fpereira/Papers/tutorial.pdf.

It has long been recognised that there’s a potential problem with this approach, as it can give you spuriously good predictions, because the method will capitalise on chance fluctuations in the data that are not really meaningful. This is known as ‘over-fitting’. One way of getting around this is to use the leave-one-out method.  You repeatedly run the analysis, leaving out data from one participant, and then see if you could predict that person’s group status from the function derived from all the other participants. This is what was done in this study, and it is an accepted method for protecting against spurious findings.
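Continuing the toy example above, leave-one-out cross-validation is a one-liner in scikit-learn (again, a sketch only; LDA here stands in for whatever classifier is actually used):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, size=(12, 4)),
               rng.normal(0.8, 1.0, size=(13, 4))])
y = np.array([0] * 12 + [1] * 13)

# Each 'child' in turn is classified by a model fitted to the other 24,
# so the accuracy estimate cannot capitalise on that child's own data.
acc = cross_val_score(LinearDiscriminantAnalysis(), X, y,
                      cv=LeaveOneOut()).mean()
print(acc)  # typically lower than the same-data figure above
```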

Another way of checking that the results aren’t invalid is to directly estimate how likely it would be to get this result if you just had random data. To do this, you assign all your participants a new group code that is entirely arbitrary, using random numbers. So every person in the study has a 50% chance of being in group A or group B. You then re-run the analysis and see whether you can predict whether a person is an A or B on the basis of the same brain data. If you can, this would indicate you are in trouble, as the groups you have put into the analysis are arbitrary. Typically, one re-runs this kind of arbitrary analysis many times, in what is called a permutation analysis; if you do it enough times, occasionally you will get a good classification result by chance, but that does not matter, so long as it happens only very rarely, say in less than 1 in 1000 runs. For readers with statistical training, we can say that the permutation analysis is a nice way of getting a direct estimate of the p-value associated with the analysis done with the original groups.
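The permutation logic is equally easy to sketch: shuffle the group labels, redo the whole leave-one-out analysis each time, and count how often the shuffled labels do at least as well as the real ones (scikit-learn’s permutation_test_score wraps the same idea; the slow but transparent loop below makes the logic explicit):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, size=(12, 4)),
               rng.normal(0.8, 1.0, size=(13, 4))])
y = np.array([0] * 12 + [1] * 13)

def loo_accuracy(labels):
    return cross_val_score(LinearDiscriminantAnalysis(), X, labels,
                           cv=LeaveOneOut()).mean()

observed = loo_accuracy(y)
# Redo the entire analysis with randomly shuffled group labels; the p-value
# is the proportion of shuffles that match or beat the observed accuracy.
null_accs = np.array([loo_accuracy(rng.permutation(y)) for _ in range(500)])
p_value = (np.sum(null_accs >= observed) + 1) / (500 + 1)
print(observed, p_value)
```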

So what did they find?



Fig 2: discriminant function (y-axis) vs reading gain


The classification accuracy of the method using the whole-brain fMRI data was reported as an impressive 92%, which was well above chance.  Also, the score on the function used to separate groups was correlated .73 with the amount of reading improvement. The brain regions that contributed most to the classification included the right IFG, and left prefrontal cortex, where greater improvement was associated with higher activation. Also the left parietotemporal region showed the opposite pattern, with greater improvement in those who showed less activation.

So could the researchers have saved themselves a lot of time and got the same result if they’d just used the time 1 behavioural data as predictors? They argue not. The prediction from the behavioural measures was highly significant, but not as strong, with accuracy reported (figure S1 of Supporting Materials) as less than 60%.  Also, once the brain measures had been entered into the equation, adding behavioural measures did not improve the prediction.

And what conclusions did they draw?

  • Variation in brain function predicts reading improvement in children with dyslexia. In particular, activation of the right IFG during a reading task predicts improvement. However, taking a single brain region alone does not give as good a prediction as combining information across the whole brain.
  • Brain measures are better than behavioural measures at predicting future gains in reading.
  • This suggests that children with dyslexia can use the right IFG to compensate for their reading difficulties.
  • Dyslexics learn to read by using different neural mechanisms than those used by normal readers.

Did they note any limitations of the study?
  • It’s possible that different behavioural measures might have done a better job in predicting outcomes.
  • It’s also possible that a different kind of brain activation task could have given different results.
  • Some children had received remediation during the course of the study, but this didn’t affect their outcomes (bad news for those doing the remediation!).
  • Children varied in IQ, age, etc, but this didn’t differentiate those who improved and those who didn’t.

Questions I have about the study

Just how good is the prediction from the brain classifier?

Figure 2 (above) shows on the y-axis the discriminant function (the hyperplane), which is the weighted sum of voxels that does the best job of distinguishing groups. The x-axis shows the reading gain. As you can see clearly, there are two individuals who fall in the lower right quadrant, i.e. they have low hyperplane scores, and so would be predicted to be no-gain cases, but actually they make positive gains. The figure of 92% appears to come from treating these as the cases where prediction failed, i.e. accurate prediction for the remainder gives 23/25 = 92% correct.
Fig 3: Vertical line dividing groups moved

However, this is not quite what the authors said they did. They divided the sample into two equal-sized groups (or as equal as you can get with an odd number) in order to do the analysis, which means that the ‘no improvement’ group contains four additional cases, and that the dividing line for quadrants needs to be moved to the right, as shown in Figure 3. Once again, accurate prediction occurs for those who fall in the top right quadrant, or the bottom left. Prediction is now rather less good, with 4 cases misclassified (three in the top left quadrant, one in the bottom right), i.e. 21/25 = 84% correct. However, it must be accepted that this is still good prediction.


 
Why do the reading-gain group improve on only some measures?
One odd feature of the data is the rather selective nature of the reading improvement seen in the reading-gain group. Table 1 shows the data, after standardising all measures to a mean of 100, SD 15. The analysis used the WRMT-WID test, which is shown in pink. On this test, and on the other WRMT tests, the reading-gain group do make impressively bigger gains than the no-gain group. But the two groups look very similar on the TOWRE measures, which were used to diagnose dyslexia, and also on the Gray Oral Reading Test (GORT). Of course, it’s possible that there is something critical about the content of the different tests – the GORT involves passage-reading, and the TOWRE involves speeded reading of lists of items. But I’d have been a bit more convinced of the generalisability of the findings if the reading improvement in the reading-gain group had been evident across a wider range of measures. (Note that we also should not assume all gain is meaningful: see my earlier blog for explanation of why.)


Why do the control group show similar levels of right IFG activation to dyslexics?
The authors conclude that the involvement of certain brain regions, notably the right IFG, is indicative of an alternative reading strategy to that adopted by typical readers. Yet the control group appear to show as wide a range of activation of this area as the dyslexics, as shown in Figure 4. The authors don’t present statistics on this, but eyeballing the data doesn’t suggest much group difference.
Figure 4: activation in dyslexics (red) and controls (blue)


If involvement of the right IFG improves reading, why don’t dyslexic groups differ at time 1?

This is more of a logical issue than anything else, but it goes like this. Children who improve in reading by time 2 showed a different pattern of brain activation at time 1. The authors argue that right IFG activation predicts better reading. But at time 1, the groups did not differ on the reading measures – or indeed on their performance of the reading task in the scanner. This would be compatible with some kind of ‘sleeper’ effect, whereby the benefits of using the right IFG take time to trickle through. But what makes me uneasy is that this implies the researchers had been lucky enough to just catch the children at the point where they’d started to use the right IFG, but before this had had any beneficial effect.  So I find myself asking what would have happened if they’d started with younger children? 

Overall evaluation
This is an interesting attempt to use neuroimaging to throw light on mechanisms behind compensatory changes in brains of people with dyslexia.  The methodology appears very sound and clearly described (albeit highly technical in places). The idea that the IFG is involved in compensation fits with some other studies in the field.

There are, however, a few features of the data that I find a bit difficult to make sense of, and that make me wonder about the generalisability of this result.

Having said that, this kind of study is challenging. It is not easy to do scanning with children, and just collecting and assessing a well-documented sample can take many months. One then has to wait to follow them up more than two years later. The analyses are highly demanding.  I think we should see this as an important step in the direction of understanding brain mechanisms in dyslexia, but it’s far from being conclusive.

Hoeft F, McCandliss BD, Black JM, Gantman A, Zakerani N, Hulme C, Lyytinen H, Whitfield-Gabrieli S, Glover GH, Reiss AL, & Gabrieli JD (2011). Neural systems predicting long-term outcome in dyslexia. Proceedings of the National Academy of Sciences of the United States of America, 108 (1), 361-6 PMID: 21173250

Saturday, 18 December 2010

What's in a name?


In a recent blog post in the Guardian, Maxine Frances Roper discussed how her dyspraxia made it hard for her to get a job. She had major problems with maths and poor physical co-ordination and was concerned that employers were reluctant to make accommodations for these. The comments that followed the blog fell mostly into one of two categories: a) people who described their own (or their child’s) similar experiences; b) people who thought of dyspraxia as an invented disorder with no validity.

Although the article was about dyspraxia, it could equally well have been about developmental dyslexia, dyscalculia or dysphasia. These neurological labels are applied to children whose development is uneven, with selective deficits in the domains of literacy, mathematical skills, and oral language development respectively. They are often described as neurodevelopmental disorders, a category which can be extended to encompass attention deficit hyperactivity disorder (ADHD), and autistic disorder. Unlike conditions such as Down syndrome or Fragile X syndrome, these are all behaviourally defined conditions that can seldom be pinned down to a single cause. They are subject to frequent challenges as to their validity. ADHD, for instance, is sometimes described as a medical label for naughty children, and dyslexia as a middle-class excuse for a child’s stupidity. Autism is a particularly interesting case, where the challenges are most commonly made by individuals with autism themselves, who argue they are different rather than disordered.

So, what does the science say? Are these valid disorders? I shall argue that these medical-sounding labels are in many respects misleading, but they nevertheless have served a purpose because they get developmental difficulties taken seriously. I’ll then discuss alternatives to medical labels and end with suggestions for a way forward.


Disadvantages of medical labels

1. Medical labels don't correspond to syndromes

Parents often have a sense of relief at being told their child is dyslexic, as they feel it provides an explanation for the reading difficulties. Most people assume that dyslexia is a clearcut syndrome with a known medical cause, and that affected individuals can be clearly differentiated from other poor readers whose problems are due to poor teaching or low intelligence.

In fact, that is not the case.  Dyslexia, and the other conditions listed above, are all diagnosed on the basis of behavioural rather than neurological criteria. A typical definition of developmental dyslexia specifies that there is a mismatch between reading ability and other aspects of cognitive development, which can’t be explained by any physical cause (e.g. bad eyesight) or poor teaching.  It follows that if you have a diagnosis of dyslexia, this is not an explanation for poor reading; rather it is a way of stating in summary form that your reading difficulties have no obvious explanation. 

But medicine progresses by first recognising clusters of symptoms and then identifying underlying causes for individuals with common patterns of deficits. So even if we don’t yet understand what the causes are, could there be value in singling out individuals who meet criteria for dyslexia, and distinguishing them from other poor readers? To date, this approach has not been very effective. Forty years ago, an epidemiological study was conducted on the Isle of Wight: children were screened on an extensive battery of psychological and neurological measures. The researchers were particularly interested in whether poor readers who had a large discrepancy between IQ and reading ability had a distinctive clinical profile. Overall, there was no support for dyslexia as a distinct syndrome, and in 1976, Bill Yule concluded: “The era of applying the label 'dyslexic' is rapidly drawing to a close. The label has served its function in drawing attention to children who have great difficulty in mastering the arts of reading, writing and spelling, but its continued use invokes emotions which often prevent rational discussion and scientific investigation” (p. 166). Subsequent research has focused on specifying what it is about reading that is so difficult for children who struggle with literacy, and it’s been shown that for most of them, a stumbling block is in the process of breaking words into sounds, so-called phonological awareness. However, poor phonological awareness is seen in poor readers of low IQ as well as in those with a mismatch between IQ and reading skill.

2. Medical labels don’t identify conditions with distinct causes

What about if we look at underlying causes? It's an exciting period for research as new methods make it possible to study the neurological and genetic bases of these conditions.  Many researchers in this field anticipated that once we could look at brain structure using magnetic resonance imaging, we would be able to identify ‘neural signatures’ for the different neurodevelopmental disorders. Despite frequent over-hyped reports of findings of ‘a brain scan to diagnose autism’ and so on, the reality is complicated.

I'm not attacking researchers who look for brain correlates of these conditions: we know far more now than we did 20 years ago about how typical and atypical brains develop, and basic neuroscience may help us understand the underlying processes involved, which in turn could lead to better diagnosis and intervention. But before concluding that a brain scan can be a feasible diagnostic test, we need studies that go beyond showing that an impaired group differs from an unimpaired group. In a recent review of pediatric neuroimaging and neurodevelopmental disorders, Giedd and Rapoport concluded: “The high variability and substantial overlap of most measures for most groups being compared has profound implications for the diagnostic utility of psychiatric neuroimaging” (p. 731, my italics).

Similar arguments apply in the domain of genetics. If you are interested, I have a blogpost explaining this in more detail, but in brief, there are very few instances where a single genetic mutation can explain dyslexia, ADHD, autism and the rest. Genes play a role, and often an important one, in determining who is at risk for disorder, but it seems increasingly likely that the risk is determined by many genes acting together, each of which has a small effect in nudging the risk up or down. Furthermore, the effect of a given gene will depend on environmental factors, and the same gene may be implicated in more than one disorder. What this means is that research showing genetic influences on neurodevelopmental disorders does not translate into nice simple diagnostic genetic tests.

3. No clear boundaries between individuals with different diagnostic labels

To most people, medical labels imply distinct disorders with clear boundaries, but in practice, many individuals have multiple difficulties. Maxine Frances Roper’s blogpost on dyspraxia illustrates this well: dyspraxia affects motor co-ordination, yet she described major problems with maths, which would indicate dyscalculia. Some of her commentators described cases where a diagnosis of dyspraxia was accompanied by a diagnosis of Asperger syndrome, a subtype of autistic disorder. In a textbook chapter on neurodevelopmental disorders, Michael Rutter and I argued that pure disorders, where just one domain of functioning is affected, are the exception rather than the rule. This is problematic for a diagnostic system that has distinct categories, because people will end up with multiple diagnoses. Even worse, the diagnosis may depend on which professional they see. I know of cases where the same child has been diagnosed as having dyslexia, dyspraxia, ADHD, and “autistic spectrum disorder” (a milder form of autism), depending on whether the child is seen by a psychologist, an occupational therapist, a paediatrician or a child psychiatrist.

4. No clearcut distinction between normality and abnormality

There has been much debate as to whether the causes of severe difficulties are different from causes of normal variation. The jury is still out, but we can say that if there are qualitative differences between children with these neurodevelopmental disorders and typically developing children, we have yet to find them. Twenty years ago, many of us expected that we might find single genes that caused specific language impairment (SLI) or autism, for instance, but although this sometimes occurs, it is quite exceptional. As noted above, we are usually instead dealing with complex causation from a mixture of multiple genetic and environmental causes. Robert Plomin and colleagues have argued, on the basis of such evidence, that ‘the abnormal is normal’ and that there are no disorders.

Consequences of abandoning medical labels 

Many people worry that if we say that a label like dyslexia is invalid, then we are denying that their child has real difficulties. This was brought home to me vividly when I was an editor of the Journal of Child Psychology and Psychiatry. Keith Stanovich wrote a short piece for the journal putting forward arguments to the effect that there were no qualitative differences between poor readers of average or below average IQ, and therefore the construct of ‘dyslexia’ was invalid. This attracted a barrage of criticism from people who wrote in to complain that dyslexia was real, they worked with dyslexic children, and it was disgraceful for anyone to suggest that these children’s difficulties were fictional. Of course, that was not what Stanovich had said. Indeed, he was very explicit: “Whether or not there is such a thing as 'dyslexia', there most certainly are children who read markedly below their peers on appropriately comprehensive and standardized tests. In this most prosaic sense, poor readers obviously exist.” (p. 580). He was questioning whether we should distinguish dyslexic children from other poor readers, but not denying that there are children for whom reading is a major struggle. Exactly the same cycle of events followed a Channel 4 TV documentary, The Dyslexia Myth, which raised similar questions about the validity of singling out one subset of poor readers, the dyslexics, and giving them extra help and attention, when other poor readers, with very similar problems but lower IQs, were ignored. A huge amount of debate was generated, some of which featured in The Psychologist. Here again, those who had tried to make this case were attacked vehemently by people who thought they were denying the reality of children’s reading difficulties.

Among those taking part in such debates are affected adults, many of whom will say “People said I was stupid, but in reality I had undiagnosed dyslexia”. This is illuminating, as it stresses how the label has a big effect on people’s self-esteem. It seems that a label such as dyslexia is not viewed by most people as just a redescription of a person’s problems. It is seen as making the problems more real: it emphasises that affected people are not unintelligent, and leads the condition to be taken more seriously than if we just say they have reading difficulties.


Should we abandon medical labels?

So what would the consequences be if we rejected medical labels? Here, it is fascinating to chart what has happened for different conditions, because different solutions have been adopted and we can compare and contrast the impact this has had. Let’s start with dyslexia. On the basis of the Isle of Wight study, Bill Yule and colleagues argued that we should abandon the term ‘developmental dyslexia’ and use instead the less loaded and more descriptive term ‘specific reading retardation’. Because of the negative connotations of ‘retardation’ their proposal did not take off, but the term ‘specific reading disability’ was adopted in some quarters. But, actually, neither term has really caught on. When I did a bibliometric survey of studies on neurodevelopmental disorders, I tried to include all possible diagnostic labels as search terms. I've just looked at the frequency with which different terms were used to describe studies on developmental reading difficulties. Dyslexia won by a long margin, with over 97% of articles using this term.

Quite the opposite happened, though, with ‘developmental dysphasia’, which was used in the 1960s to refer to difficulties in producing and understanding spoken language in a child of otherwise normal ability.  This term was already going out of fashion in the UK and the USA in the 1970s, when I was doing my doctoral studies, and in my thesis I used ‘specific developmental language disorder’. Subsequently, ‘specific language impairment’ (SLI) became popular in the US research literature, but there is current concern that it implies that language is the only area of difficulty, when children often have additional problems.  Among practitioners, there is even less agreement, largely because of an explicit rejection of a ‘medical model’ by the profession of speech and language therapy (speech-language pathology in the US and Australia). So instead of diagnostic labels practitioners use a variety of descriptive terminology, including ‘language difficulties’, ‘communication problems’, and, most recently in the UK ‘speech, language and communication needs’ (SLCN). [If you've never heard of any of these and want to see how they affect children's lives, see http://www.afasicengland.org.uk].

There do seem to be important negative consequences, however. As Gina Conti-Ramsden has argued, specific language impairment (or whatever else you want to call it) is a Cinderella subject. The amount of research funding directed to it is well below what you’d expect, given its frequency and severity, and it would seem that most members of the public have no idea what it is. Furthermore, if you say a child has ‘developmental dysphasia’, that sounds more serious and real than if you say they have ‘specific language impairment’. And to say they have language ‘difficulties’ or ‘needs’ implies to many people that those difficulties are fairly trivial. Interestingly, there also seems to be an implicit assumption that, if you don’t have a medical label, then biological factors are unimportant, and you are dealing with problems with purely social origins, such as poor parenting or teaching.

An article by Alan Kamhi had a novel take on this issue. He argued that a good label had to have the properties of a meme. The concept of a meme was introduced by Richard Dawkins in The Selfish Gene, and subsequently developed by Susan Blackmore in her book The Meme Machine. A meme is an element of culture that is transmitted from person to person, and a successful meme has to be easy to understand, remember and communicate to others. Importantly, it does not necessarily have to be accurate or useful. Kamhi asked “Why is it more desirable to have dyslexia than to have a reading disability? Why does no one other than speech-language pathologists and related professionals seem to know what a language disorder is? Why is Asperger’s syndrome, a relatively new disorder, already familiar to many people?” (p. 105). Kamhi’s answer is that terms with ‘language’ in them are problematic because everyone thinks they know what language is, but their interpretations differ from those of the professionals. I think there is some truth in this, but there is more to it than that. In general, I’d argue, the medical-sounding terms are more successful memes than the descriptive terms because they convey a spurious sense of explanation, with foreign and medical-sounding labels lending some gravity to the situation.

What to do?
We are stuck between the proverbial rock and hard place.  It seems that if we stick with medical-sounding labels for neurodevelopmental disorders, they are treated seriously and gain public recognition and research funding. Furthermore, they seem to be generally preferred by those who are affected by these conditions. However, we know these labels are misleading in implying that we are dealing with clearcut syndromes with a single known cause.

So here’s a proposal that attempts to steer a course through this morass. We should use the term ‘neurodevelopmental disability’ as a generic term, and then add a descriptor to indicate the areas of major difficulty. Let me explain why each part of the term is useful. “Neurodevelopmental” indicates that the child’s difficulties have a constitutional basis. This is not the same as saying they can’t be changed, but it does move us away from the idea that these are some kind of social constructs with no biological basis. The evidence for biological contributory causes is considerable for those conditions where there have been significant neurological and genetic investigations: dyslexia, SLI, autism and ADHD.

I suggest ‘disability’ rather than ‘disorder’ in the hope this may be more acceptable to those who dislike dividing humanity into the disordered and normal. Disability has a specific meaning in the World Health Organization classification, which focuses on the functional consequences of an impairment for everyday life. People who are the focus of our interest are having difficulties functioning at home, work or school, and so ‘disability’ seems a reasonable term to use.

It follows from what I’ve said above that the boundary between disability and no disability is bound to be fuzzy: most problems fall on a scale of severity, and where you put the cutoff is arbitrary. But in this regard, neurodevelopmental disability is no different from many medical conditions. Take high blood pressure, for instance: there are some people whose blood pressure is so high that it is causing them major symptoms, and everyone would agree they have a disease. But other people may have elevated blood pressure and doctors will be concerned that this is putting health at risk, but where you actually draw the line and decide that treatment is needed is a difficult judgement, and may depend on the presence of other risk factors. It’s common to define conditions such as dyslexia or SLI in terms of statistical cutoffs: the child is identified as having the condition if a score on a reading or language test is in the bottom 16% for their age. This is essentially arbitrary, but it is at least an objective and measurable criterion. However, test scores are just one component of diagnosis: a key factor is whether or not the individual is having difficulty in coping at home, work or school.

‘Neurodevelopmental disability’ alone could be used to indicate that the person has real difficulties that merit attention and support, but it lumps together a wide range of difficulties. That is no bad thing, however, given that many individuals have problems in several domains. The term would actively discourage the compartmentalised view of these different conditions, which leads to an unsatisfactory situation where, for instance, researchers in the US have difficulty doing research on the relationship between reading and language disabilities because these are seen as falling under the remit of different funding streams (NICHD and NIDCD respectively), or where a researcher who is studying language difficulties in autism will have much greater chance of obtaining funding (from NIMH) than one who is studying language difficulties in non-autistic children (which are far more common).

Having defined our generic category, we need to add descriptors that specify weaknesses and strengths. Identification of areas of weakness is crucial both for ensuring access to appropriate services, and to make it possible to do research on individuals with common characteristics. Table 1 shows how traditional medical categories would map on to this system, with a downward arrow denoting a problem area, and = denoting no impairment. But this is just to illustrate how the system corresponds to what we already have: my radical proposal is that we could do away with the labels in the top row.

Table 1: Traditional categories (top row) vs new system
A major advantage of this approach is that it would not force us to slot a person into one diagnostic category; rather it would encourage us to consider the whole gamut of developmental difficulties and document which apply in a given case. We know that many people with reading difficulties also have impairments in maths, oral language and/or attention: rather than giving the person a dyslexia label, which focuses on the reading difficulties, the full range of problem areas could be listed. Intelligence does not feature in the diagnostic definition of autism, yet it makes a big difference to a person’s functioning if intelligence is in the normal range, or above average. Further, some people with autism have major problems with literacy, motor skills or attention, others do not. This framework would allow us to specify areas of weakness explicitly, rather than implying that everyone with a common diagnostic label is the same. Further, it would make it easier to document change in functioning over time, as different areas of difficulty emerge or resolve with age.

In addition, a key feature of my proposed approach would be that assessment should also aim to discover any areas that parents or children themselves identify as areas of strength (up arrows), as fostering these can be as important as attempting to remediate areas of difficulty. If we take Maxine Frances Roper as an example, she evidently has good language and intelligence, so her profile would indicate this, together with weaknesses in maths and motor skills.

In the past, the only area of strength that anyone seemed interested in was IQ test performance.  Although this can be an important predictor of outcome, it is not all that matters, and to my mind should be treated just like the other domains of functioning: i.e., we note whether it is a weakness or strength, but do not rely on it to determine whether a child with a difficulty gains access to services.

When we consider people’s strengths, these may not be in cognitive or academic skills. Consider, for example, Temple Grandin. She is a woman with autism who has become a highly respected consultant in animal husbandry because of her unusual ability to put herself in the mind of the animals she works with. Obviously, not every person will have an amazing talent, but most will have some activities that they enjoy and can succeed in. We should try and find out what these are, and ensure they are fostered.

Will it happen?

Although I see this approach as logical and able to overcome many of the problems associated with our current diagnostic systems, I’d be frankly amazed if it were adopted.

For a start, it is complex and has resource implications. Few practitioners or researchers would have the time to do a comprehensive assessment of all the areas of functioning shown in Table 1. Nevertheless, many people would complain that this list is not long enough! What about memory, speech, spelling, executive function, or visuospatial skills, which are currently not represented but are studied by those interested in specific learning disabilities? The potential list of strengths is even more open-ended, and could encompass areas such as sports, music, craft and cookery activities, drama, ability to work with animals, mechanical aptitude and so on.  I’d suggest, though, that the approach would be tractable if we think about this as a two-stage procedure. Initial screening would rely on parent and/or teacher and/or self report to identify areas of concern. Suitable well-validated screening instruments are already available in the domains of language, attention, and social impairment, and this approach could be extended. Areas identified as specific weaknesses could then be the focus of more detailed assessment by a relevant professional.

The main reason I doubt my system would work is that too many people are attached to the existing labels. I’m sure many will feel that terms such as autism, ADHD, and dyslexia have served us well and there’s no need to abandon them. Professional groups may indeed be threatened by the idea of removing barriers between different developmental disorders. And could we lose more than we gain by ditching terminology that has served us well, at least for some disorders?


Please add your comments

I certainly don’t have all the answers, but I am hoping that by raising this issue, I’ll stimulate some debate. Various academics in the US and UK have been talking about the particularly dire situation of terminology surrounding speech and language disorders, but the issues are broader than this, and we need to hear the voices of those affected by different kinds of neurodevelopmental disabilities, as well as practitioners and researchers.

With thanks to Courtenay Frazier Norbury and Gina Conti-Ramsden for comments on a draft of this post.


PS. 27th December 2010
A couple of relevant links:

More on failure of speech-language pathologists to agree on terminology for developmental language disorders.

Kamhi, A. G. (2007). Thoughts and reflections on developmental language disorders. In A. G. Kamhi, J. J. Masterson & K. Apel (Eds.), Clinical Decision Making in Developmental Language Disorders: Brookes.

A recent Ofsted report, concluding that many children with 'special educational needs' are just poorly taught. 

PPS. 19th June 2011
Problems with the term 'speech, language and communication needs':
Lindsay, G. (2011). The collection and analysis of data on children with speech, language and communication needs: The challenge to education and health services. Child Language Teaching & Therapy, 27(2), 135-150.


This article (Figshare version) can be cited as:
Bishop, Dorothy V M (2014): What's in a name?. figshare.
http://dx.doi.org/10.6084/m9.figshare.1022866


Tuesday, 14 December 2010

When ethics regulations have unethical consequences

I've been blogging away from home lately. On 1st December, Guardian Science published a guest blog from me relating to a recent PLOS One article, in which I examined the amount of research, and research funding, for different neurodevelopmental disorders. There are some worrying disparities between disorders, which need explanation.  I've already had lots of interesting emails and comments on the post, but I'm planning to revisit this topic shortly, and would welcome more input, so please feel free to add comments here if you wish.

My latest blog is a guest post for Science 3.0, on ethical (and unethical) issues in data-sharing.
How (some) researchers view ethics committees (IRBs)

Disclaimer: the author is vice-chair of the Medical Sciences Interdisciplinary Research Ethics Committee at the University of Oxford, and notes that most committee members are on the side of the angels.

Friday, 26 November 2010

Parasites, pangolins, peer review, promotion and psychopathology

My latest blog has been posted on Guardian Science blogs. It’s on an unusual topic – neglected tropical diseases. I’m especially interested in the neglected neuropsychological consequences of these.
http://t.co/aWSZ3bD

In addition, here is a round-up of some of my favourite links from the past couple of months:


While we are on the theme of parasites, there is an excellent Ozzie All in the Mind podcast on parasites affecting behaviour. It is wonderful.

Anyone about to go on a flight might enjoy the late, great Dave Allen on flying:

Piece on peer review, with immortal Einstein quote: “I see no reason to address the - in any case erroneous - comments of your anonymous expert”

Come on sisters, get savvy: "more women than men appear to know little or nothing about promotion criteria and the process involved" http://bit.ly/bGoYnY

For those obsessed by their H-index, a cautionary note

A knitted skeleton.

A neurodevelopmental perspective on Winnie the Pooh

More psychopathological classification: neurotypical disorder

A robot with pudgy, beanbag-like hands.  Especially exciting after 2 min.

Fun Facts about Pangolins

Naomi Oreskes: Merchants of Doubt. "Media's attraction to conflict causes them to exaggerate the small number of people who disagree."
 
Evolution of the alphabet. Nicely done. http://www.gifbin.com/984203

Proof positive that tow-away men have no soul. Look at red circle area on left. http://www.zadan.nl/pics/timing/
 
THIS IS IMPORTANT
Are you concerned about control of the media falling into the hands of a small number of powerful people?  I am, which is why I have donated to 38degrees, to support their campaign to stop Rupert Murdoch gaining yet more control of UK media. Please look at this site and consider donating:
https://secure.38degrees.org.uk/donate-to-stop-murdoch

And finally.....

http://phylab.mtu.edu/~nckelley/Focus/

Thursday, 14 October 2010

The Challenge for Science: a speech by Colin Blakemore from 1998



When I wrote a blog in August on ‘How our current reward structures have distorted and damaged science’ I mentioned a speech I had heard Colin Blakemore give some years earlier at the British Association, in which he said some trenchant things about the Research Assessment Exercise. I am pleased to say that Colin was able to dig out the text of the speech, and has kindly agreed for me to post it here. It is an important document for two reasons: first, much of what it has to say remains relevant today, and second, it is of considerable historical interest, as it anticipated many subsequent developments. In particular, it highlighted:

1) The wider need for independent scientific advice, and the importance of embedding science at the heart of government;
2) The need for an independent department of science and a seat in Cabinet for the minister;
3) Deficiencies in the evaluation of science, especially the RAE;
4) The failure of British industry to invest adequately in R&D;
5) The need for a new approach to science education.

********************************************************************************************
The Challenge for Science
Colin Blakemore
University of Oxford
President of the British Association



Sir Walter Bodmer, Lord Mayor, Lord Crickhowell, Sir Donald Walters, Vice-Chancellor, Vice-Presidents, Pro-Vice Chancellors, Members of the University and the British Association.


Hoffwn i ddiolch i'r Brifysgol am fy ngwneud yn Gymrawd Anrhydeddus ac am estyn croeso cynnes i mi ac i'r British Association. [I would like to thank the University for making me an Honorary Fellow and for extending a warm welcome to me and to the British Association.]

In these days of the news flash and the executive summary, it is a rare privilege to have 45 minutes to speak on any subject. But let me start with the obligatory sound bite. This is a tale of two sheep.

The first sheep is pickled in formaldehyde, not for scientific examination but for the amusement of the chattering classes. This sheep is, of course, the product of Damien Hirst, the enfant terrible of the cool Britannia art scene. When he won the Turner Prize in 1994, young Damien confessed: "It's amazing what you can do with an E grade in A-level art, a twisted imagination and a chainsaw". The sculptor, Richard Wentworth, who taught Hirst at Goldsmiths' College, says that he has "fantastic penetrative power". Must be the chainsaw, I presume! But Damien certainly has the respect of the guardians of British culture. His split and pickled animals have earned him more than £1 million. And he was voted on to BBC Radio 3's list of 'Centurions' - the 100 people who have made the greatest cultural contribution in the 20th century.

The second sheep in my story lives in a paddock at the Roslin Institute, just outside Edinburgh. It isn't pickled. Like most other female sheep in this country, it had a lamb earlier this year. It isn't in any way unusual, but that's what makes it amazing. Its name, of course, is ‘Dolly’ - the first mammal ever cloned from a somatic cell. Dolly rivals Damien Hirst's sheep in notoriety, but took somewhat more than an E in A-level art and a lot of balls to make. Indeed, Dolly took no balls at all!

Dolly's creators, Dr Ian Wilmut and his colleagues, are not among Radio 3's Centurions.

Just 16 months before the end of a millennium is as good a time as any to reminisce. During the past 100 years, what has Britain given the world? Damien Hirst, of course. And some truly great artists and writers. A modest contribution to classical music; much more to Pop. And a glittering array of dancers, conductors, film makers, designers, choreographers, and actors. But arguably its most significant, enduring and internationally recognized contribution to 20th century culture has been its science.

The list of British achievements, in relation to our size and our expenditure on science, is truly astounding. In molecular and cellular biology, to which Dolly the sheep is just a recent contribution, British scientists have a particularly impressive record. The work of Crick, Watson, Wilkins and Franklin on the structure of DNA stands out, of course. But think too of Krebs, Todd, Sanger, Perutz, Kendrew, Klug, Porter, Brenner, Gurdon. I hesitate even to mention names, for fear of offending the string of British scientists who virtually invented molecular biology, which will change our lives beyond recognition in the 21st century.

In other areas of biology too, Britain has led the world. The mechanism of the nerve impulse, of muscle contraction, of chemical transmission at nerve-muscle junctions and at synapses in the brain, the processing of information in the nervous system: Britons have won Nobel Prizes for laying the foundations of all these fields.

And in the physical sciences too, the record this century is amazing. Twenty-one British winners of the Nobel Prize in Physics, 23 in Chemistry.

We are depressingly fond of saying that Britain has done brilliant basic research but has failed to turn discovery into practical and commercial application. But that is misplaced modesty. Simon Jenkins reminded us earlier this afternoon of Jacob Bronowski’s comment: "The essence of science: ask an impertinent question, and you are on the way to a pertinent answer." British discoveries have, for instance, propelled the spectacular advance of medical science. Think of the practical impact of the pioneering work by Ross on the transmission of malaria, by Gowland Hopkins on vitamins, by Medawar on graft rejection. Think of Doll's painstaking demonstration of the link between smoking and cancer; Vane's discovery of the prostaglandins and Isaacs and Lindenmann's of the interferons. Think of the medical importance of JBS Haldane's concept of genetic linkage analysis, Fisher’s foundation of modern statistics, Koehler and Milstein's techniques for the production of monoclonal antibodies. And of course, Fleming, Florey, Chain and Abraham gave us the miracle of penicillin and cephalosporin. Britain pioneered in vitro fertilization and is now playing a leading role in the genetic analysis of human disease. And the British pharmaceutical industry has made an enormous contribution to drug development: anti-ulcer drugs, new forms of cancer chemotherapy, many successful vaccines, Retrovir (the first marketed treatment for AIDS), and new drugs for epilepsy and rheumatoid arthritis. Now the newspapers tell us that even Viagra itself was invented by a British scientist - who has 5 children!

Radar, the jet engine, television, the chemistry of fermentation, the hologram, supercurrent tunnelling, confocal microscopy, thermionic phenomena, the first programmable computer, Nuclear Magnetic Resonance Spectroscopy, the Hovercraft, computed tomography, genetic fingerprinting, X-ray crystallography, and, of course, mammalian cloning. Britain has played a major role in all these scientific and technological developments, whose practical and economic significance is immense.

The creative outpouring of British science in the past century has been a cause of envy and admiration around the world. As Sir Robert May, Chief Scientific Adviser to the government, pointed out, in an article in the journal Science last year, British scientists have outstripped every nation in the world, bar the United States, in their record of major international prizes for science (per head of the population). Yet there is not a single scientist on Radio 3's list of cultural superstars. Indeed, that list of Centurions was deliberately limited to artists, writers and philosophers; and, as far as I know, the BBC has no plans for a comparable tribute to British science. 

Science at the head of the agenda 

Tony Blair writes in his introduction to the programme of this Festival: "With the new millennium ahead, we cannot afford to be complacent and to live on past glories alone." In that case, for what new glories will Britain be known at the end of the next century? Will it still have such a remarkable reputation for science?

If I had been giving this Address two months ago, I could only have said that the future of British science did not look very rosy. The OECD estimated that the UK public spend on R&D fell from 0.73% of Gross Domestic Product in 1981 to 0.43% in 1996. Even by 1994, UK government spending on university research, per capita of the labour force, was roughly one-third of the level in Switzerland and Sweden, half that of the USA, France and Germany. We were 16th out of the 18 nations in the OECD league tables, just behind Iceland. Now, I've got nothing against Iceland: it looks a beautiful place from the aeroplane. But I have to admit that I don't know the name of a single Icelandic scientist! The government's own published analysis shows an almost continuous decline in gross expenditure on R&D (by both government and industry) through the Nineties, to just 1.94% of GDP in 1996 (compared with 2.52% for the United States and 2.77% for Japan).

Earlier this year I attended the centenary annual meeting of our young daughter organization, the American Association for the Advancement of Science, where President Clinton promised to raise the US budget for basic science by $1.2 billion in 1999, the largest increase ever. He also proposed the establishment of a $31 billion “21st Century Research Fund”, with the aim of doubling federal funding for basic research in the coming decade. With bipartisan support there is now talk in Congress of increasing spending by a factor of four! The Japanese government has also recently given a 12% increase in science funding, despite the current economic crisis; or, more accurately, I might say because of the crisis. The Japanese National Institute of Science and Technology Policy has estimated that the doubling of spending by the government on R&D by the year 2000 will result in a 1% increase in the rate of growth of the economy between 2005 and 2010.

Over the last two years of Tory rule, the situation was very generally acknowledged to have become critical, with the Science Budget (the expenditure of the research councils and the Office of Science and Technology itself) actually falling in real terms, despite the broad agreement that the appropriate inflator for the cost of research far exceeds the Retail Price Index. The budget for this year, estimated by Save British Science as the lowest for 27 years, was, of course, inherited by new Labour, which came in with a commitment to maintaining spending limits until the Comprehensive Spending Review.

The scientific community had grown so accustomed to being fobbed off with massaged statistics and promises of jam tomorrow, that there was no great optimism about the result of the Comprehensive Spending Review. The outcome, which we have heard about today from Lord Sainsbury and Sir John Cadogan, is all the sweeter because of that. I use this opportunity to say, on behalf of all my colleagues, how grateful we are to Sir John, to Bob May, to John Battle and to Margaret Beckett for the case that they must have presented on behalf of science. We thank the government for this recognition of the value of science and the Wellcome Trust for providing £400 million of the £1.4 billion of additional funding over the coming three years.

In an unprecedented editorial in the journal Science just two weeks ago, Tony Blair described the increases in funding, confirmed his view that "the science base is the absolute bedrock of our economic performance" and asserted that success in science "will help to realize the creative potential of the next generation". For the first time in 20 years, we have clear signs that the government recognizes the central importance of science, not just as the fount of innovation for industrial success, but at the heart of the nation's culture for the 21st century.

How can we make the best use of this new funding? How can we build on this gesture of support from the government, and bring science fully to the service of the nation? In all areas of public life we are being told by the new Labour government to "think the unthinkable". With the hope that they are willing to listen to the "unthinkable", I wish to offer a set of more or less radical proposals for putting science at the heart of our culture for the 21st century. 

Establish a Ministry of Science 

You can surely judge the significance that government attaches to any particular area of policy by the way in which it is represented in the governmental process.

Until 1992, the administration of the research councils had been firmly rooted in the Department of Education and Science. Immediately after the 1992 election, John Major announced the formation of the Office of Science and Technology, which was placed in the Cabinet Office and overseen by William Waldegrave, Chancellor of the Duchy of Lancaster, who represented science in Cabinet. These changes were in response to pressure from many quarters for greater recognition and a more direct voice for science in government. A year later the government published the first major policy document on science for 20 years, the White Paper entitled “Realising our Potential”, which painted a picture of science in the service of industry. It set the scene for the Technology Foresight programme, "to inform government's decisions and priorities". This programme is aimed at identifying areas for marketable development, which the Higher Education Funding Councils and the research councils must take account of in their own funding decisions.

After the White Paper, the Advisory Council on Science and Technology was replaced by the Council for Science and Technology (CST). The role of that Council is not widely understood and it seems to lack the wide-ranging influence within government that it should have. The Dearing Report suggested that the CST should also be scrapped and reinvented. This has not happened, but it has been re-launched with a promise of greater openness.

In the reshuffle that followed the leadership contest in 1995, the OST was summarily, and apparently without consultation, booted out of the Cabinet Office and into the Department of Trade and Industry, where it still lives - a somewhat uncomfortable cuckoo in the nest of business. This unceremonious move, together with the disappearance of the ministerial committee on science and technology policy, symbolized the Tory government's perception of science. Its principal role, perhaps its only worthwhile function, was to deliver practical applications to an industrial sector most of which had a less than impressive record of investing for itself in R&D.

The scientific community welcomes the appointment of Lord Sainsbury as a Minister whose brief is science alone, which is another clear signal of the importance the new Labour government attaches to science. However, the fact that the Minister does not report directly to Cabinet and that the OST is located within the walls of the DTI limits their potential to play a really central role in government. Science is the engine of wealth creation, but it is also relevant to the work of virtually every other government department. To health and to education as well as to industry: and also to agriculture, safety, the environment, food and defence; to transport, social services, overseas development, drug control and crime prevention. Yes, and even to culture, media and sport!

The current brief of the OST has been assembled from the residue of the old structure for the management of the research councils, together with a scientific advisory role and a new and flourishing interest in the public understanding of science. These various elements do not appear to be cohesively organized. Concern has been expressed in several quarters about the location of the OST, and, in response, Bob May's trans-departmental group has recently been moved back to the Cabinet Office. This move, welcome in itself, has exacerbated the lack of cohesion in the work of the OST.

One action above all others would confirm this government's commitment to science in the 21st century. I urge Tony Blair to establish an independent Ministry or Department of Science, with a seat in Cabinet for its Minister.

Liberated from the DTI, and with broader powers, the new Ministry could establish a more coherent management structure, extend consultative and advisory links to all the other arms of government, and coordinate the whole of science policy. It could monitor government-funded research, reducing unnecessary duplication of research effort and exercising more uniform quality control.

It could set up mechanisms to integrate the several lines of scientific advice that the government receives through the departments of health, MAFF, the Chief Scientific Adviser, etc; and it could develop new ways of 'taking the pulse' of the scientific community on current scientific issues. Its important role in promoting the public understanding of science should be more closely integrated with the government's own ways of seeking and understanding scientific advice, so that the public can more effectively be kept informed of the basis of the government's thinking on scientific issues.

An independent Ministry of Science would also be better placed, and have more authority, to orchestrate the response of different departments to unexpected and urgent scientific problems. The chaotic response to the BSE crisis provides a bitter example of the present inadequacies of coordination of science policy.

More than £4 billion has already been committed to cattle slaughter and compensation - public money down the abattoir drain. That is almost twice the current annual government expenditure on the whole of science. The BSE crisis has decimated the British beef industry. It has tarnished the image of MAFF. It has badly, perhaps permanently, damaged our reputation overseas for safety controls. It may be decades before the British food industry is trusted again. Although the signs are encouraging, we still cannot be sure that there is not going to be an epidemic of human disease of biblical proportions. No event in modern times has more clearly demanded a rapid, well-planned and integrated response from all the arms of scientific funding and research, but singularly failed to receive it. A Ministry of Science with a coordinating role might - just might - have prevented the worst of this tragedy.

We must learn lessons from the BSE saga, still by no means over, as we see from today’s publicity about the possibility of the infection of sheep. We should recognize how widespread the ramifications of health and safety issues can be, spanning the work of many government departments. We must accept the inadequacies of the present ill-coordinated systems for advising government, for making public and implementing advice, and for commissioning and funding high-priority research.

I applaud the government for responding to the call for an independent Inquiry into BSE, and the open and efficient manner in which Sir Nicholas Phillips is conducting it. The findings of that Inquiry must be used to inform the new Food Standards Agency, which could be closely linked with the new Ministry of Science. 

Improving scientific advice 

The quality and independence of the scientific advice given to the UK government, through the Chief Scientific Adviser and other Departmental Chief Scientists, is high in comparison with many other countries. However, in the light of failings revealed by the BSE affair, the House of Commons Science and Technology Select Committee is currently conducting an inquiry into the scientific advisory system: this is, then, an appropriate time to speculate on how advice might be better delivered.

In a recent article, Sir Richard Southwood, reflecting on the BSE disaster and the inadequate way in which the recommendations of his committee were treated, concluded:

"Within government the Chief Scientist could have a role to follow through the interpretation of independent scientific advice and to monitor the implementation of recommendations...He (or she) could audit a 'follow-up'".

The pressing issues of the modern world, most of which have a scientific dimension, are handled by government ministers and officials, few of whom have had a formal scientific training. Never has the need for good scientific advice been greater; and the need is bound to increase. In a paper published last year, Bob May himself was also critical of the present system for collecting and assessing factual evidence, and for monitoring the way in which government uses the advice. A much more comprehensive, transparent and accountable system is needed for coordinating the advisory process. Managing such a system would be a central task for the new Ministry of Science.

I am concerned about the fact that, with the exception of situations in which special advisory committees are established (such as the Southwood Committee and the Spongiform Encephalopathy Advisory Committee), there is no explicit mechanism laid down for the government's advisers to consult broader opinion within the scientific community on which to base their judgement; that judgement is thus protected from the normal process of scientific challenge. Some government departments do have access to other sources of scientific advice, through their own research establishments, but it may often be difficult for government employees to be utterly dispassionate in their advice. Also, the speed and unpredictability of scientific progress make it impossible for such units always to give a properly balanced view.

What is needed is a more extensive system of consultation, which I believe that the scientific community will readily take on as part of its responsibility to society. Continuing issues, such as climate change, energy supply and environmental protection, may deserve permanent standing committees of experts. For more immediate issues, like BSE, transport policy and drought management, ad hoc committees of experts could be assembled, and disbanded when their task is complete. The expertise of professional scientific societies and the Royal Society should be harnessed in the service of government advice.

Finally, in the spirit of openness that this government has espoused, the nature of advice and the way in which it is used should be made public, except in rare cases of risk to national security.

The lack of scientific understanding among many ministers and civil servants may currently inhibit them from seeking, revealing or using scientific advice. All government departments should be compelled to refer significant policy issues to the Ministry of Science, even if they do not realise that those issues have a scientific dimension. The nature of any scientific advice should be disclosed to other government departments, as well as to the media, as background to ministerial statements. This might be the antidote to the kind of political machismo that provides instant, firm answers to every question - something that scientists themselves rarely do. The BSE crisis was punctuated with public assurances from ministers about the lack of risk that must have made their advisers wince.

Greater transparency in the advisory process will help to reduce oversimplification and misunderstanding. And it will mean that scientists no longer have to take the blame for inadequate responses to advice. Openness would also benefit the public understanding of science itself, since it would reveal the nature of experiments, scientific disagreement, and the concepts of risk and probability, in a context immediately relevant to current political issues.

I hope that the government will ask the British Association to play a part in promoting the spirit of openness, particularly by using the annual Festival to air subjects of topical interest and to give the public and the media the opportunity to monitor the advisory process at work. 

Science and long-term strategy 

Inevitably, the business of government is often reactive rather than strategic. An independent Ministry of Science should be given the resources and the links with other departments to help develop long-term strategies in areas for which science is relevant, including in the European and international arenas.

I can immediately suggest one urgent topic for such strategic analysis. It is the demographic time-bomb of the world's ageing population, which is, in my opinion, still not being taken sufficiently seriously. Average life expectancy in Britain has increased by a staggering 3 months for every year of this century! Of course, much of this is accounted for by a disproportionate decrease in infant mortality, but there has also been a very real increase in the average duration of adult life. By the middle of the next century, more than one in ten of the population of Britain will be over 75. Our children's children will expect to live to 100. We must, as a nation, plan now for a massive unbalancing of society, in which fewer and fewer young adults are supporting more and more of the retired. This remarkable demographic trend is testimony to the success of modern medicine in keeping most of the body going. We might imagine that, as people become more confident of a long and healthy life, many will want to retire later (a trend that is already apparent in the United States). Graduated retirement programmes, in which workload is scaled down with age, rather than terminating brutally, might not only make sense economically but also prevent some of the emotional crises that often follow abrupt retirement.
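
A quick back-of-the-envelope check of what that rate implies (assuming, for simplicity, that the three-months-per-year increase held steadily across the whole century):

$$100\ \text{years} \times \frac{3\ \text{months}}{\text{year}} = 300\ \text{months} = 25\ \text{years}$$

In other words, someone born in 2000 could expect to live roughly a quarter of a century longer than someone born in 1900.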

But the quality of life, as well as the ability of the elderly to continue to work effectively and to contribute in other ways to society, are so often compromised by diseases and disorders of the ageing brain and nervous system - the one organ system in the body that cannot significantly replace or repair itself. Any strategic plan for the problem of the ageing population must give the highest priority to research on the human brain, including the devastating diseases that can transform the Third Age into misery - stroke, motor neuron disease, CJD, Parkinson's Disease, Alzheimer's Disease. 

Value for money from civil science 

How did Britain sustain such a remarkable record in research during this century, and especially in the three decades following the Second World War? I believe that there are two main reasons.

The first was the favourable environment for creative freedom in British universities. The relatively generous student:staff ratio and the minimal bureaucratic burden provided an opportunity for university staff to pursue their research, free from the constraints of external direction or ear-marked funding. Job satisfaction was high, despite low salaries, and the level of productivity in research was unrivalled in any other university system except the United States.

The second reason for the success of British university research was the brilliantly simple dual-support system. The government funded university research in two ways: core support, channelled through the old University Grants Committee (now through the Higher Education Funding Councils), and external grants to provide supplementary funding for projects requiring special equipment, additional help or expensive running costs.

During the past twenty years, British university science has suffered not only declining research funding but also a series of upheavals. I have already mentioned the peregrinations of the OST, the reorganization of the research councils and the impact of Foresight. But we have also seen the transfiguration of the Polytechnics into full universities without additional research funding, the serious erosion of the dual support system, the introduction of industrial-style appraisal for academic staff, a huge increase in paper-shuffling bureaucracy, a doubling of student:staff ratios, a halving of resources per student, and, of course, the attempt to monitor research output through the Research Assessment Exercises.

Many of these changes were offerings on the Thatcherite altar of Accountability. These were implemented by the academic community with considerable misgivings and not a little ridicule, but in the hope that if we kowtowed one more time, the God of Accountability would be appeased. The net effect of many of these changes has been to demoralize and demotivate UK researchers, and to make research in UK universities less efficient rather than more.

In the context of its new commitment to science, I urge the government to conduct a wider review of the management and appraisal of UK science.

Much has been written about the pros and cons of the dual-support system. In the quest for selectivity, some have argued for a complete shift to a US-style system, with no direct institutional funding by federal government, but substantial overheads on external grants, at a rate negotiated directly between the funding agency and the university. But many of the great American research universities are private, with huge endowments; many enjoy lavish financial support from their alumni and from philanthropic foundations. In the less wealthy US universities, the security of workshop staff, secretaries, even the janitor, is determined by the outcome of individual grant applications.

Over three years, from 1992, a substantial fraction of the Funding Councils' research budget was shifted to the research councils - the so-called DR shift. The intention was to pass more of the core support to the most productive departments. However, the money was removed from universities according to a strict formula, based on existing grant income, but was not fully returned in overheads and new categories of direct funding when grants were renewed. Research council committees that had been starved of funds for so long refused to accept many of the requests for additional direct funding (contributions to the salaries of departmental technicians, etc). Consequently, many departments, my own included, were driven into serious deficit and were forced to cut central facilities savagely, to the detriment of research.

The payment of automatic grant overheads on salaries alone has biased support in favour of labour-intensive areas of research and has encouraged researchers to overload applications with salaries rather than consumable costs. Overheads should be paid on consumables as well as salaries.

My own view, in line with that of the Dearing Report, is that we must preserve the dual support system. It is gratifying to learn, then, that the Comprehensive Spending Review increased the Research element of the HEFCs’ budget by £300 million over the coming three years, which roughly preserves the ratio of funding through the two arms of the dual support system.

1986 was the height of the Thatcher government's crusade for accountability. The great sword of the British government was raised against the evil enemy of dead wood. That was the year of the first full-scale assessment of the quality of research in UK universities, generating grades that were used to apportion the direct element of the dual support system. The subsequent series of Research Assessment Exercises has certainly flushed out complacency and focused attention on the importance of supporting the best of British research. However, the time has come to question the continuing value of RAEs, as now conducted.

The emphasis of RAEs on numerical performance indicators has fostered tactics within universities that have damaged British science. The relentless pressure to publish more has imposed a short-term perspective that discourages risky or long projects: yet these are the kinds of research that are most likely to be truly innovative. Universities are tempted to run their recruitment programmes like those of football clubs, head-hunting for productive research groups to boost performance rather than as part of a genuine strategic plan. Not that mobility and competition between universities are a bad thing, but the RAEs have distorted the sensible planning of research.

But my main criticism of RAEs concerns their cost-effectiveness. The HEFCs estimated that the 1992 RAE cost them £4 million. The cost of each RAE to the universities, especially in the time of academics and administrators, is enormously greater. After the four RAEs that have already taken place, the changes in ranking that now occur from Exercise to Exercise are generally small in magnitude and in number. In other words, huge effort and cost are being invested to discover less and less information.

I believe that we should abandon full-blown Research Assessment Exercises and concentrate on methods for discovering changes in ranking. Perhaps departments that believe that their standing has improved should be allowed to submit evidence, with a penalty for unsuccessful applications. To detect decreasing performance, an initial minimal trawl of data from all departments, say every five years, could be used to direct further analysis on departments identified as possibly being in decline.

Rather than concentrating only on rewarding the strong, I think that it is very important to have mechanisms to enable up-and-coming departments, especially in the new universities, to graduate into the research funding league. The Dearing Report proposed the allocation of 'scholarship' allowances to all members of permanent staff in lower graded departments, to encourage them to establish collaborations with other institutions and hence to develop their own research potential. This is a good idea, but the amount proposed for these allowances, £500 per annum, is inadequate to support effective collaboration. It would be good if more could be made available to unclassified departments that can demonstrate effective mechanisms for distributing the funds to the most promising members of staff.

I also want to make a point about that element of the additional money for the science budget that has been called the 'Infrastructure Fund'. This £600 million, half provided by the Wellcome Trust, is the key to renovating the fabric and facilities of British laboratories. The mechanisms for its allocation, yet to be announced, are crucial. I hope that most if not all of this money will be directed to the universities rather than other research institutions, which have been relatively protected from deterioration and obsolescence. I hope that universities will be given sufficient time to prepare well-considered applications, and that these will be judged by proper peer review. I also hope that there will be no predetermined allocation according to subject (except for whatever allowance to the biosciences is necessarily dictated by the Wellcome Trust's statutes). This is an opportunity to develop areas of science of strategic importance, not just to confirm the status quo. 

A fresh look at science education 

One of the most important functions of the new Ministry of Science, in cooperation with the Department of Education and Employment, would be to help to shape the future of science education, from primary school to the furthest reaches of lifelong learning.

In the past 30 years, school education, below Sixth Form level, has gone through a series of radical changes, sometimes looking more like a battle between the dogmas of educational fashion and of political philosophy than rational policy. To my mind, this partly reflects the problem of conducting educational research, since it is difficult if not ethically unacceptable to carry out real experiments. And the outcome of any reform is very hard to assess, partly because of the protracted time-scale of education but mainly because there are so many uncontrollable additional variables.

We know that the human brain passes through periods of particular sensitivity to certain kinds of learning - learning to walk, to remember faces, to talk, to read, to interact socially, etc. In principle, it should be possible to design an educational system that is matched to the period of sensitivity for each kind of learning task. I hesitate to make dogmatic statements in a field where opinion tends to triumph over fact, but I think that we know almost nothing about the optimal time and method for teaching, say, mathematics, history or science. However, we do know that, before the age of about 8 years, second languages are learned relatively effortlessly by most children, exploiting the natural sensitivity for language acquisition. Yet the present curriculum in this country largely delays foreign language teaching until secondary school. This partly explains why the British (but not the Welsh, of course) are so bad at foreign languages. Can we be sure that this penalty is outweighed by any advantage in concentrating on the formal teaching of mathematics, science, history, etc, in primary school?

If we cannot do proper experiments in education, we can at least look at experience elsewhere. Such comparisons do not support the new policy of concentrating on formal teaching and testing of the 3 Rs in the very first years of primary school. In some countries, such as Germany and Switzerland, which do not start formal instruction until the age of six or seven, school leavers actually perform better in tests of the 3 Rs than British children do. I applaud the efforts of the government to increase access to primary education from the age of four, which will help mothers, especially single parents, to return to work. But I believe that it would be more efficient for such young children to concentrate on 'learning to learn' and on developing group cooperation in problem solving. Britain has very high rates of teenage pregnancy and youth crime, as well as many other signs of social malaise among young people. Perhaps an emphasis on social skills in the early years of school would do more to reverse these problems than the 3 Rs.

As someone who went to school through the iniquitous period of the 11-plus, I was astonished by the recent proposal to test 4-year-olds in reading, writing and maths, even before entry to primary school, and to use these tests to stream the children. Such skills at that age could only have come from coaching at home. This testing will directly disadvantage children from less supportive home backgrounds.

But the greatest need for radical thinking is in the area of higher and further education, from Sixth Form on. The rapid increase in the number of young people moving on to higher education over the last 10 years is a great achievement, moving Britain more into line with other developed countries. But we are still working within a framework for higher education designed in the 1940s. Just three specialized subjects at A-level; the 3-year Honours degree course; and, until this year, free university tuition for all. This structure worked well enough in 1961, when I left school, because only 5% of school leavers went to university. But it is under severe strain now that more than 30% do so. The introduction of the student contribution to fees is only patching up the problem of financing higher education. But because of it, even more undergraduates will live in abject poverty and will incur even larger debts. And an increasing fraction of students are not completing the course of their initial choice.

There is a growing consensus that the traditional Sixth Form curriculum is too narrow, particularly now that many schools are simply unable to maintain extracurricular activities. I hope that the government will look again at the introduction of a broader curriculum, as in other European countries, with, say, 3 major and 2 minor subjects, and with every Sixth Former studying both arts and science in some form.

Of course, there will be knock-on effects at university level, with a need for more foundation teaching to compensate for the broadening of teaching in the Sixth Form. This will make it difficult to reach a full Honours degree level in only 3 years, and I believe that it will be necessary to move to four years for such qualifications. If this applied to every student, it would obviously put a huge additional strain on university staff and facilities, as well as on the pockets of students and their families. So, I think that we should look seriously at the introduction of a two-year Ordinary degree course, aimed at providing a well-rounded advanced education or preparation for subsequent vocational training.

Also, many young (and not so young) people would be able to benefit more from university education, and to finance themselves more easily, if course structures were more flexible, as in the United States. The trend towards teaching in modules is a good thing (as long as we can resist examining entirely by multiple choice questions). The introduction of American-style 'credits' for examinations passed, which can be accumulated and even transferred to another university, would enable students to take time off for temporary work.

Finally, it is worth pointing out an odd paradox in science education in this country, and a lesson that emerges from it. It is clear that traditional single science subjects have not shared in the huge general expansion in higher education. The welcome increase in the numbers taking science at GCSE is not feeding through to A-level choices. While the fraction of post-16 students taking A-levels has tripled since 1962, the proportion taking combinations of Chemistry, Physics, Biology and Maths has actually decreased, especially in the State sector. There has been an increase in those taking mixtures of arts and sciences - a kind of do-it-yourself Baccalaureate - but only one in five of those students go on to read science-based courses at university.

Consequently, the total numbers of undergraduates reading only science subjects or maths has stayed remarkably constant, while the overall numbers have rocketed up. Just last year there was a further 2.8% fall in the number of applications to read physics. There has been an increase in Biology, and also in combined courses that include an element of science (some of them pretty arbitrary combinations). But it is clear that the huge increase in university entrance has not led to a proportionate rise in the number of science graduates, as Simon Jenkins told us this afternoon.

Should this worry us? In fact, Britain's graduate output in the sciences is not out of line with that of our major industrial competitors. Remarkably, Britain graduates relatively more people in the natural sciences, maths, computing, and even engineering than France, Japan or the United States.

The one comparison on which Britain falls down miserably is in the proportion of science graduates who secure employment as science professionals. While the fraction of the labour force employed in science and engineering R&D has risen steadily since the 1970s in Germany, Japan, France and the USA, in the UK it has actually fallen. The reasons are fairly obvious. Scientists and engineers are underpaid, compared with other professionals, far below the average for accountants and managers. And there are just not enough good jobs for scientists.

Britain is a manufacturing nation with limited natural resources: our very survival depends on our powers of invention. Yet, Britain was alone among major OECD nations in reducing its level of investment in R&D, as a percentage of GDP, since 1981. Both industry and government were guilty of this neglect. With the notable exception of the highly successful pharmaceutical, aerospace and petrochemical industries, the level of investment of British industry is between 0.5 and 1% lower than that of our main competitors - between £3 billion and £7 billion less, overall, per annum! This under-investment in innovation, combined with the millstone of the over-valued pound, has put British industry at a terrible disadvantage in international trade. Hardly surprising then that the trade gap is yawning wider than ever - £5.7 billion in the red for the first half of this year.
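
For scale, one can invert the speech's own arithmetic to recover the GDP figure those numbers imply (this is simply a consistency check on the quoted range, not an independent estimate):

$$\frac{£3\ \text{billion}}{0.5\%} = £600\ \text{billion}, \qquad \frac{£7\ \text{billion}}{1\%} = £700\ \text{billion}$$

so the quoted shortfall of £3 billion to £7 billion corresponds to a GDP of roughly £600 to £700 billion.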

A new Ministry of Science, working with the DTI, should generate radical ideas to stimulate British industry to invest more in R&D, and to employ more science graduates. Why not much better tax incentives for R&D; compulsory detailed reporting of R&D expenditure in annual reports; new schemes to encourage companies to sponsor undergraduates and research students, and to employ them for periods between modular courses?

There is so much to do if this government is truly to put science where it belongs, at the head of the agenda for the next century. 

One culture? 

Chris Smith, Secretary of State for Culture, Media and Sport, opens his recent book, Creative Britain, with the following words:
"This book is about creativity. It is about the cultural ferment and imaginative heights to which creativity leads, the enormous impact that both creativity and culture have on society and the growing importance to the modern economy of Britain of all those activities and industries that spring from the creative impulse."

But the word ‘science’ appears on only two pages of Chris Smith's book - in a transcript of a brief speech that he made for the opening of an exhibition on design. As I read the book, I realised that, for most people in this country, the word ‘culture’ means something that is discussed on BBC2 after Newsnight. And that doesn't include science.

The British Association’s Festival of Science for the year 2000 will take its place in a month-long celebration of creativity in the South Kensington area, with the overall theme of “One Culture, not Two”. It's a wonderful idea to acclaim all the creative skills of Britain for the millennium, but, frankly, I am not among those who see science and the arts as essentially part of a single process of discovery.

Science is about unlocking the truth of Nature. Artistic creativity is of a different kind, aimed at stamping the identity of the creative person indelibly on the work of art. The kind of 'experiments' that artists do, like Damien Hirst's sheep, are attempts to find novel ways of engaging our senses and our cognitive processes, probing and testing the instinctive reactions of the human mind. As the Spanish philosopher, George Santayana, wrote: "An artist is a dreamer consenting to dream of the actual world".

I suspect that even more pictures of Dolly have appeared in the press than of Damien's sheep, and her image has conjured up all sorts of emotions - surprise, wonder, alarm, even fear. But those reactions were not the aim of the research. The motivation of Ian Wilmut and his colleagues was to test a particular hypothesis - about the capacity of the nucleus of a differentiated cell to initiate and orchestrate the entire process of mammalian development. And the creative value of the work is not its particular product, pretty though Dolly is: it is the principle that she reveals and the potential for application of that principle in a multitude of ways.

We judge the creative quality of art by its uniqueness, and it is devalued by reproduction; but we judge scientific creativity by the generality of its implications: reproducibility is the sine qua non of good science. 

Informing the public 

Finally a word about the public understanding of science, a cause to which the OST is strongly committed and for which this Festival is a centrepiece. It is now 15 years since our Chairman, Sir Walter Bodmer, produced his report, which led to the establishment of the Committee on the Public Understanding of Science. COPUS has raised the profile and the respectability of the public communication of science, especially by professional scientists. But we still have a long way to go. A flippant way of putting it is that, ten years ago, the British public didn't know much about science and didn't care: now, they know a little more but care a great deal. I think that the public concerns about genetically modified organisms, about food safety, about cloning, and even about the use of animals in research are a healthy sign of public engagement in national affairs, so much lacking in other areas of British life. But many of the concerns and suspicions about science are based on a lack of understanding. It is the task of the scientific community, the scientific media and, dare I say it, the new Ministry of Science, to answer the public's concerns, to provide them with the knowledge on which to make valid judgements, and to respect the fact that the people are the ultimate arbiters of how science can best serve this country in the 21st century.


September 1998