Wednesday, 30 June 2010
Book Review: The Invisible Gorilla
Chabris, C., & Simons, D. (2010). The invisible gorilla and other ways our intuition deceives us. London: HarperCollins.
Psychology is a much misunderstood discipline. If you go into a high street bookstore, you will find the psychology section stuffed with self-help manuals and in all probability located next to the section on witchcraft and the occult. This is partly the fault of the Dewey Decimal Classification, which sandwiches Psychology firmly between Philosophy and Religion. For those who regard experimental psychology as a scientific discipline with affinities to medicine and biology this is a problem, and some psychology departments have dissociated themselves from the fluffy fringes of the discipline by renaming themselves as departments of cognitive science or behavioural neuroscience. An alternative strategy is to reclaim the term psychology to refer to a serious scientific discipline by demonstrating how experimentation can illuminate mental processes and come up with both surprising and useful results. This book does just that, and it does so in an engaging and accessible style.
The book starts out with the phenomenon referred to in the title, the one for which the authors are best known: the Invisible Gorilla experiment. It has become famous, but I won't describe it here in case the reader has not yet experienced the phenomenon. Richard Wiseman has a nice video demonstrating it. This is perhaps the most striking example of how we can deceive ourselves and be over-confident in our judgement of what we see, remember or know. In all there are six chapters, each dealing with a different 'everyday illusion' to which we are susceptible.
My personal favourites were the last two chapters, which consider why people continue to believe in notions such as the damaging effect of MMR vaccination, or the beneficial effects of brain training for the elderly. Sceptics tend to dismiss those who persist in such beliefs in the face of negative evidence, and denigrate them as stupid and scientifically illiterate. Chabris and Simons, however, are interested in why scientific evidence is so often rejected, and consider why anecdotes are so much more powerful than data, and why we are sucked into assuming there is causation when only correlation has been demonstrated. My one disappointment was that they did not say more about the reasons for wide individual variation in people's scepticism. After a rigorously sceptical undergraduate course in experimental psychology at Oxford, I assumed that all my peers on the course would be sceptics through and through, but that is far from being the case: I have intelligent friends who learned all about the scientific method, just as I did, yet who are now adherents of alternative therapies or psychoanalysis. I find this deeply puzzling, but it makes me realise that the satisfaction I find in the scientific method is partly due to the fact that it resonates with the way my brain works, and there are others for whom this is not so.
In sum, I enjoyed this book for the insights it gave into how people think and reason, and for its emphasis on the need to adopt scepticism as a mind-set. Its avoidance of jargon and clear explanations give it broad appeal, and it would make an ideal text for undergraduates entering the field of experimental psychology, because it illustrates how a good experimenter thinks about evidence and designs studies to test hypotheses.
Thursday, 24 June 2010
An exciting day in the life of a scientist
or: How to kill a few hours trying to get publication-quality figures out of Matlab
This is really just a boring moan; blogging as therapy.
Well, yesterday I had the excitement of getting proofs for an 'in press' article. Virtually no errors to be corrected, but what was this list of queries from the publisher? Ah, the figures. Resolution too low. Well, that should be easy to fix - I'd do it first thing. Or so I thought.
9.30 a.m.
The paper has an unusually large number of figures, eight, some in colour. All were created in Matlab and saved in .tiff format.
I was pretty proud of generating the figures in Matlab. Graphics in Matlab are a bit of a nightmare and take some time to learn, but once you have learned them, you can generally create figures that are more complex than those produced by the other applications I know.
The proofs have come from Developmental Science, but they tell me the figures are too low resolution, even though I'd selected a 'no compression' option when saving them.
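In retrospect, the compression option was a red herring: in Matlab the output resolution is controlled separately, by the -r flag to the print command. A minimal sketch, with the filename and resolution purely illustrative:

    % Resolution is set by the -r flag, not by the compression option:
    % this writes the current figure as a 300 dpi TIFF.
    print(gcf, '-dtiff', '-r300', 'figure1.tif');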
Coincidentally, I have another article that is under consideration by Journal of Neuroscience, who also mention their stringent requirements for figure quality, and point me to a website, Cadmus, that will explain what is required and how to do it. Oh good, I think. Someone will help walk me through how to get good quality figures.
Ha ha ha.
Cadmus has a list of programs and formats that are supported. Alas, Matlab is not among them. But Adobe Illustrator is. We have a copy of that. I used to have it on my machine, but uninstalled it because I never used it and got fed up when graphics files defaulted to opening in it, which took ages. Tracked down the CD, reinstalled it (compact version). Right, I think, Matlab will allow me to export a file in .ai format, and then I will be OK.
Ha ha ha
9.45
I start with the simplest figure I have – a simple black and white line drawing with a couple of text labels. I save it in .ai format.
When I click on it, Adobe Illustrator tries to open it, but first tells me it has 'unrecognised fonts' (Arial?) and then says it can't open it.
OK, I think: I can open an .eps or .tiff file in Adobe Illustrator, and I can also save my Matlab figure in those formats. But once again, I get strange messages about wrong fonts, and for the .eps version, what appears on the screen is unrecognisable from the original.
I look again at what Cadmus says about Adobe Illustrator. Oh dear.
"PLEASE NOTE: When creating graphics in illustration programs such as Adobe Illustrator with the intention of outputting to an imagesetter or platesetter, it is extremely important that the person creating the illustration have a thorough understanding of the details of imaging in a prepress environment. There are an abundance of complex problems that can occur at output if paths are set up improperly, colors are indicated incorrectly, or other elements are constructed improperly. Trapping issues can also present problems if not addressed. The more complicated your illustration becomes, the greater the probability of problems at output, and therefore the need for more expertise and experience in creating the files."
Decide that I had better try another option, since I have never used Adobe Illustrator and would not recognise a platesetter if I stumbled over one.
But, I think, there is a helpful application associated with Cadmus that allows you to check your files. And I can just open my .eps file from there. Having gone through the usual round of registering, thinking of a password, getting email confirmation of the account, etc., I am into 'Rapid Inspector'. I try opening my .tiff file. FAIL, says Rapid Inspector. Resolution too low. OK, how about the .eps version? Ah, says Rapid Inspector: "Rapid Inspector found an image with CMYK color. CMYK color is not supported. Acceptable color space include(s): Spotcolor, Lineart, Grayscale, RGB."
But this is a black and white figure!
I spend some time in Matlab trying to sort this one out, but with no success.
My own fault, but I can't find the script that I made to generate the figure in the first place, and I will need to redo it with different fonts etc. So I waste 10 minutes tracking it down, resolving once again always to save my programs in sensible places with sensible names.
I go on to the web to find out how to change the colormap to gray. Re-run program, save the figure, and try it again in Rapid Inspector. It still tells me I have CMYK color.
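For the record, the sort of thing I was trying, with the filename illustrative:

    % Switch to a grayscale colormap, then save as black-and-white eps
    % (-deps writes black and white; -depsc writes colour)
    colormap(gray);
    print(gcf, '-deps', 'figure1.eps');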
It also complains about my fonts. "Rapid Inspector detected that some or all fonts are missing from this file. To pass inspection, all fonts must be embedded. The following fonts are not embedded: Helvetica. "
That's odd, as I was using Arial, not Helvetica. (Presumably Matlab's eps driver substitutes the standard PostScript font Helvetica for Arial.) I try a few more runs of the program with different fonts. It still doesn't like my fonts.
10.45
Time to do a Google search about how to save a Matlab figure with embedded fonts.
Well, it is nice to know I am not alone, and that many others have had this problem over the years. Several complain that it is about time Matlab did something about it.
One helpful person, Oliver Woodford, has written a routine called export_fig, which is freely available:
http://www.mathworks.com/matlabcentral/fileexchange/23629
Excellent.
But, he explains, if you want to use it to create the kinds of files I need, you need to download two other applications from other sources. Fortunately, I already have the first, but the second, xpdf, is one of those applications that makes the non-geek's heart sink when you go to the download webpage and find, instead of clear instructions about what to do, a whole list of possibilities. I fear that the one I probably need ends in .tar.gz. I've tangled with these things before but can never quite remember what to do with them.
11.30
After a bit of fiddling about, I save the .tar.gz file, then try to extract the contents. A few failures as I do something wrong, and then at last I have it. But I am not sure I have it in the right place, and no indication is given as to where it should be saved. I've just stuck it in my Matlab program folder.
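A note for anyone following in my footsteps: Matlab can unpack these archives itself with its untar function, which copes with .tar.gz files directly. Something like this, with the archive name and destination purely illustrative:

    % Extract a gzipped tar archive to a folder of your choosing
    untar('xpdf-3.02pl4.tar.gz', 'C:\MyScripts\xpdf');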
12.00
OK I should be all set, so now let's look at the examples of how to use export_fig.
Nice helpful man who wrote the script clearly has been through everything I have, and more. He writes:
"Exporting a figure from MATLAB the way you want it (hopefully the way it looks on screen), can be a real headache for the unitiated, thanks to all the settings that are required, and also due to some eccentricities (a.k.a. features and bugs) of functions such as print. The first goal of export_fig is to make transferring a plot from screen to document, just the way you expect (again, assuming that's as it appears on screen), a doddle."
This is looking more promising....
Print out the instructions – 13 pages of them.
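To be fair, the basic call looks pleasingly simple – something along these lines, with the filename and resolution illustrative:

    % export_fig uses the figure's own background colour, so set it
    % to white first, then export the current figure at 300 dpi
    set(gcf, 'Color', 'w');
    export_fig('figure1.tif', '-r300');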
12.30
Took a break to look at some interesting data: what I ought to be doing instead of this rubbish.
14.30
OK back to export_fig.
First attempt failed. Matlab can't find export_fig.
I need to put the script somewhere else.
OK, eventually sorted that by putting all the export_fig m-files into the Matlab folder in My Documents.
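An alternative, I now realise, would have been to leave the files where they were and add their folder to Matlab's search path; the path here is illustrative:

    % Add the export_fig folder to the search path, and save the path
    % so it survives into future sessions
    addpath('C:\MyScripts\export_fig');
    savepath;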
All going very well so long as I am exporting to .png format.
But I want .eps.
When I try that, the program complains that it needs pdftops.
So where the hell is that?
I will have a hunt.
Found it, but it is a .cc file.
It does not seem to be recognised by Matlab.
So I have now spent more time on the website looking for a .m file.
There doesn't appear to be one.
Gave up and decided to try a .tiff file.
Hah! Nasty bossy Rapid Inspector says PASS. Hooray! But it turned out I was reading in a different file, created with the same name in May.
Back to my .tiff option in export_fig.
This fails the resolution test, even though it is specified as maximum quality.
15.00
Have a cup of tea.
Back to trying the .eps option.
Can't work out how to use the xpdf file.
Program stops and asks for pdftops.
I have located pdftops.m and pdftops.cc, but neither seems to be what it wants.
As far as I can see from looking at the code, it wants an .exe file.
The web tells me that a .cc file is a C++ file.
In some desperation I tried renaming the .cc file as .exe, but that did not work.
Decide to write to the author of the script, having read all the comments on the program and found that nobody else is having problems.
Send the email. It bounces. I had mistakenly included a full stop at the end of the email address.
Try to resend to correct address: email keeps autocompleting to the address with the full stop.
After two tries, get into 'frequent contacts' in the address book and delete the entry, so I can now send the email.
15.35
I need another cup of tea to calm down.
So now trying figure 2, a coloured headplot.
Already have it as .tiff; it looks very nice.
Rapid Inspector tells me FAIL! Resolution is too low.
I try saving as .png. Get a lovely-looking picture.
Rapid Inspector won't read it.
Try exporting from Microsoft image reader to .tiff and then reading it in.
Now I get:
"alpha_planes: Rapid Inspector found extra color channels within this image. Extra color channels are also known as Alpha channels. Alpha channels are not supported. Please use an image editor to remove alpha channels from this file.resolution: Rapid Inspector found a low-resolution (RGB) image (96 DPI). The minimum required resolution for this type of image is 300 DPI. "
16.30
The wonderful Oliver Woodford replied and explained patiently how to cope with the pdftops thing. I downloaded it. Still did not work. Downloaded it again, to the location he had said his was in. It works!!
And the figures it creates are acceptable to the wretched Rapid Inspector.
Verdict
I'm really grateful to those who have produced free software that helped me deal with this.
But I am really annoyed on two counts.
First, Matlab is an expensive package. It does wonderful things and I love it to bits as a programming tool, but its graphics are not easy to use. People have been complaining for at least two years about the difficulty of generating high-resolution output, yet nothing has been done. It should be a high priority for the Matlab developers to fix this so that there is a simple command to generate this kind of output.
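To be concrete: what I want is for a single call along these lines – illustrative, since at present the eps output fails the journals' font checks – to produce compliant output without any third-party help:

    % One command, high resolution, fonts embedded: this is what ought
    % to just work
    print(gcf, '-depsc2', '-r300', 'figure1.eps');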
Second, the Journal of Neuroscience exemplifies a trend in many journals to make authors do a lot of work that would, in the old days, have been done by copy-editors and other professionals. Scientists are supposed to have skills in graphic design and programming on top of all their other accomplishments. Some journals do still accept figures in a range of formats and look after any conversion from their end. But increasingly, the onus is put on authors. There appears to be no correlation between the wealth of a journal and the amount of help it will give to authors – in fact, if there is a correlation, I suspect it is inverse. Journal of Neuroscience charges hefty fees for just submitting a paper, let alone publishing it, with added costs of $1000 per figure unless first and last authors are members of the Society for Neuroscience – we are not and we have lots of colour figures. So our grant will be spent on shoring up J. Neuroscience rather than employing a vacation student for a few weeks.
I reckon that on a 1-10 scale of geekiness I am a 6-7, and I am struggling. I am a full-time researcher with good support. I am a reasonable programmer. But I've got lots of colleagues who are trying to produce papers who are closer to a 1 or 2 on the geekiness scale, have little or no support, and are trying to fit research around busy teaching commitments. How on earth can they cope with all of this?
Sunday, 13 June 2010
Journalists and the "scientific breakthrough"
There has been some animated discussion about science journalism by journalists and bloggers relating to the Fish Oil story that featured in my 1st June blog on the Orwellian prize. In summary, Ben Goldacre wrote a piece criticising the lax journalism behind the story, and was then roundly criticised himself by Jeremy Laurance of the Independent, who felt Ben was going too far in attacking hard-pressed journalists. There is a good piece by Ed Yong that gives the background and offers a cogent analysis. Here I don't want to rehearse the arguments – I've already added a comment to Yong's blog underlining my support for Ben Goldacre. Instead I have a constructive suggestion from a scientist to science journalists about how they might do things differently when handling science stories.
This is prompted by my experiences when attending meetings of the British Science Festival. The British Science Association, which runs the festival, has a distinguished track record: founded in 1831 as the British Association for the Advancement of Science, its meetings included highlights such as the first use of the term 'dinosaur' (1841), the debate on Darwinism between Huxley and Wilberforce (1860), Joule's experiments (1840s) and the first demonstration of wireless transmission (1894).
Times, however, have changed. Science has become increasingly specialised. The British Science Festival plays a key role in fulfilling the stated goals of the Association: "connecting science with people: promoting openness about science in society and affirming science as a prime cultural force through engaging and inspiring adults and young people directly with science and technology, and their implications". It is, however, not a venue where scientists reveal to the world a hitherto unreported "breakthrough". In general, the material that is reported at the Festival will either be already published in specialist journals, or will be "work in progress". In fact, most scientists find the whole concept of the "breakthrough" suspect: most research proceeds incrementally rather than by a sudden leap forward.
The media, however, don't seem to grasp this essential point. They descend on the meeting in droves and summon presenters to press conferences on the basis of the one-paragraph press releases that the scientists are encouraged to supply. They don't attend the lectures. If my experience is anything to go by, they focus on at most 5% of the presentations, selecting those that they judge will make a good story of interest to the general public. Then, during the week of the Festival, there is a plethora of newspaper articles reporting on the selected presentations, inevitably talking about them as if the work is entirely new. Scientists then get uncomfortable as they feel their research has been "talked up" to make it into a newsworthy "breakthrough". The following week, the newspaper coverage of science reverts to its previous low level.
Now, while I can understand some of the reasons for this, it seems to me a terrific waste of an opportunity. Gathered at the Festival is a subset of Britain's scientific elite, selected in large measure because they are doing interesting research and can communicate their science to a general audience. (Inevitably there are some who are hopeless at doing this and misunderstand the nature of the audience, but they are a minority). Their work doesn't stop when the Festival comes to an end. They will return to their labs and continue doing the interesting research. So if I were a science journalist, I would go to the Festival not to frenetically write stories to be published that week, but to make contacts with researchers who could supply me with good science stories for the rest of the year. I'm sure most journalists don't have time to tour the country interviewing scientists, but if they could make a personal contact with a scientist and persuade them they want to write a serious piece, they would find most would be amenable to a future telephone interview. And instead of having to respond rapidly to a press release, journalists could take a more relaxed approach, which would reduce their stress levels and give them time to check things carefully.
I suspect that many journalists will reply to say this is all very well, but it's not news and so editors would not stand for it. Their job is not to provide the public with a general science education, but to tell people about breakthroughs. I disagree. There is far too much science for any one person to grasp, and so most scientific research will be novel to the majority of readers. The difficulty is to explain what is often technical and complex material to a lay readership. We have some excellent science journalists who are good at doing exactly that. If they could build up a network of relationships with scientists, they'd find a great many fascinating stories out there that are currently appreciated by only a minority of specialists.
Saturday, 5 June 2010
Book review: biography of Richard Doll
Smoking kills: the revolutionary life of Richard Doll by Conrad Keating
Signal Books: Oxford, 2009
I only once had the opportunity of hearing Richard Doll speak. It was in 2004, after a hard-hitting lecture by Charles Warlow on over-regulation of research (http://tiny.cc/m1n7v). Doll, aged 92, looking physically frail but mentally alert, stood up to point out that under current ethics regulation, none of the epidemiological studies that he had done to demonstrate a link between smoking and cancer would have been possible. The only way to rescue epidemiology from the trend for increasingly strict and bureaucratic constraints, he felt, was for someone to deliberately flout the law and be prepared to go to prison for the cause of liberating research. "I am prepared to volunteer myself for that role", he said, prompting a spontaneous outburst of applause.
This engaging biography of Doll paints a picture of a man who could have been trusted to follow through on this offer. But as well as telling us about the remarkable life of Doll himself, the book is fascinating for its account of the history of evidence-based research in medicine. Doll would have gone to Cambridge to read mathematics were it not for the fact that he flunked one of the examinations after being plied with strong alcohol the night before. That fateful event led him to study medicine at St Thomas's Hospital, but he never lost his love of mathematics, and was quick to see the potential for using statistical methods and randomisation to evaluate treatments. Together with Austin Bradford Hill he developed the method of randomised controlled medical trials which has become the cornerstone for evaluation of efficacy of treatments. He subsequently took up a post researching disease prevention, a niche that not only suited him well, but was also one of the few options open to him, as his membership of the Communist party made him virtually unemployable as a hospital doctor. Remarkably, Home Office regulations meant he was also banned from teaching undergraduates, although he was allowed to teach postgraduate students.
The account of his seminal work on smoking and cancer was a revelation to me. I had no idea that in the 1950s it was commonplace for doctors to offer a cigarette to a patient to calm them down before an examination. Nor that there had been a six-fold increase in lung cancer mortality in males between 1930 and 1944. Many of the discussions around this epidemic are reminiscent of those currently seen in the field of autism – was there some factor that could explain the increase, or was it simply that detection of the disease had improved, especially with new X-ray methods for visualising tumours? Doll was remarkable for his single-mindedness and energy in assessing large numbers of patients, and for his ability to see the need for controls. He was ever alert to the possibility of misinterpreting data and developed methods of guarding against unwitting bias. Calculations were done by hand and painstakingly checked. Doll was concerned not to publish until certain of his facts, leading to some delay in publication of the first results. Keating emphasises the extent to which Doll felt it important not to become a campaigner for public health measures, but merely to present the results of his findings. It is a view nicely summed up by a comment attributed not to Doll, but to his collaborator Hill, who was confronted at a cocktail party by a doctor who said "You're the chap who wants us to stop smoking." "Not at all," replied Hill. "I'm interested if you go on smoking to see how you die."
As each new study by Doll found further support for a strong link between smoking and lung cancer, and also revealed an unexpected link with heart disease, initial apathy towards the results turned into acceptance by the Medical Research Council, criticism by the tobacco industry, who derided the findings as 'unscientific', and alarm by the government, who made substantial sums in taxes on smoking. One of the harshest critics was the eminent statistician and heavy smoker R. A. Fisher. Fisher was exercised by a key limitation of epidemiological studies, the lack of experimental control. "Correlation does not mean causation" was his mantra, and he argued forcefully for alternative explanations of the association. The debate turned acrimonious, with Fisher accusing Doll and Hill of failure to make their data available, and others accusing Fisher of conflict of interest, owing to his lucrative consultancies with tobacco companies. Again, aspects of the debate have some modern resonances: in my own field of neuropsychology, it is often argued that the goal is to find the necessary and sufficient cause of a disorder, much as Fisher had argued. On this view even a single case of lung cancer in a non-smoker could overturn a causal theory. But gradually it became clear that a factor such as smoking could act as a risk factor for disease without acting in a deterministic fashion. In my own writings on developmental neuropsychology I have used the example of smoking and cancer to make the point that causal mechanisms may be multifactorial and probabilistic (http://tiny.cc/i8x5l), but until I read this book I had not realised that there was a bitter debate around this very issue in the 1950s. Keating describes it as a shift from reliance on Koch's postulates to Hill's specification of criteria for causality. Koch, working on infectious diseases, had specified that, among other things, to prove an organism was the cause of a disease the organism had to be discoverable in every instance of the disease, and that experimental animals could be given the disease by inoculation with a pure culture. Hill's criteria involved nine viewpoints, none of which on its own gave indisputable evidence for a causal effect, but which taken together would help determine whether causality was a reasonable hypothesis: strength, consistency, specificity, temporality, biological gradient, plausibility, coherence, experiment and analogy.
Doll is best-known for his work on smoking, but he also played a pivotal role in discovering the health risks of asbestos, ionising radiation and the contraceptive pill. Keating tackles head-on the accusations that emerged after Doll's death that he was in the pay of companies such as ICI and Monsanto, and refutes firmly any suggestion of malpractice. Doll believed that if one wanted to obtain access to data it was important to have some relationship with companies. This emerged most strikingly in his early work on asbestos, where his preliminary findings caused shock and alarm. He resisted attempts to suppress the data, but negotiated with the company where he had done his studies with the aim of doing further analyses. The company were willing to implement changes to their working practices and Doll judged that more good could be done by working with them than attacking them: after Doll's findings, measures for containing asbestos fibres were taken so that the workers could at last see that there was a clock on the far-side wall. Doll made no personal profit from this company, nor from his work for chemical firms, whose fees were donated to charity. Nevertheless, the book raises big questions about how to handle potential conflict of interest and illustrates the importance of modern practices of full disclosure – which only became commonplace after his retirement, and of which Doll fully approved.
I can thoroughly recommend this book as providing a stimulating account of a fascinating life. It puts into historical perspective aspects of medical statistics and epidemiology that are now largely taken for granted, but which may have taken much longer to be developed without Doll's intelligence and dedication.
A final word of thanks to the Book House, Summertown, the small, independent bookseller where I happened upon this book. Please support them if you live in Oxford: I can guarantee that if you wander in and have a browse, you will find something interesting and unexpected on the shelves. (And no, I don't have any vested interests – just keen for the survival of a local amenity!)
Tuesday, 1 June 2010
Orwellian prize for journalistic misrepresentation
CALL FOR NOMINATIONS
I am offering a prize each year for an article in an English-language national newspaper that has the most inaccurate report of a piece of academic work.
The prize will consist of a certificate and statuette, and I would welcome suggestions for the design of both of these.
The prize will be awarded in January of each year.
Rules
1. The article must purport to report results of academic research, and judgement will be based on a points scoring system, as follows:
* Factual error in the title: 3 points
* Factual error in a subtitle: 2 points
* Factual error in the body of the article: 1 point
2. Factual errors must be ones that can be judged against publicly available documents – i.e. not just opinions or reports of interviews.
3. Nominations must be posted on this blog. The nomination should contain:
* Web addresses for both the nominated article and the academic source that is misrepresented.
* Name and email contact of the nominator. Anonymous nominations are not allowed
* A scored copy of the article, as illustrated below
If a nominated article is not available electronically, then the nominator should provide a list of the points used to score the article, and retain a photocopy of the article, which should be provided to the judges on request.
4. If there is more than one plausible candidate for the prize, then additional criteria will be used, such as:
* The seriousness of the error, e.g. could it damage vulnerable groups?
* Relevant undisclosed vested interests by journalist or his/her newspaper
* The ratio of accurate to inaccurate content
* The presence of irrelevant but misleading content
* The size of the readership
and mitigating circumstances, such as
* Whether there was a misleading press release from the academic's institution
* Whether a scientist colluded in 'talking up' the findings and going beyond data
Nominators are encouraged to comment on these points also, but final judgement will be made by a panel of judges.
Illustrative example of nomination:
The following article is a strong contender for the prize in 2010, and illustrates the scoring system.
Inaccurate sections shown in square brackets with N points in round brackets.
Nominated article: http://www.guardian.co.uk/science/2010/may/30/fish-oil-supplement-concentration
Misrepresented source article: http://www.ajcn.org/cgi/content/abstract/91/4/1060
Nominator: DVM Bishop, dorothy.bishop@psy.ox.ac.uk
[Fish oil](3) helps schoolchildren [to concentrate](3)
US academics discover [high doses of omega-3 fish oil](2) combat [hyperactivity and attention deficit disorder](2)
* Denis Campbell
* The Observer, Sunday 30 May 2010
Children [can learn better at school](1) by taking [omega-3 fish oil supplements](1) which [boost their concentration](1), scientists say.
Boys aged eight to 11 who were given doses [once or twice a day](1) of docosahexaenoic acid, an essential fatty acid known as DHA, showed [big improvements in their performance during tasks involving attention](1).
Dr Robert McNamara, of the University of Cincinnati, who led the team of American researchers, said their findings could help pupils to study more effectively and potentially help to tackle both attention deficit hyperactivity disorder (ADHD) and depression. The study, reported in the American Journal of Clinical Nutrition, is important because a lack of DHA has been implicated in ADHD and other similar conditions, with poor maternal diet sometimes blamed for the child's deficiency.
ADHD affects an estimated 4%-8% of Britons and can seriously impair a child's education because they have trouble concentrating and are often disruptive in class. A lack of DHA has also been associated with bipolar disorder and schizophrenia.
"We found that, if you take DHA, you can enhance the function of those brain regions that are involved in paying attention, so it helps people concentrate," said McNamara. "The benefit is that it may represent an intervention that will help children or adults with attention impairments."
The researchers gave 33 US schoolboys 400mg or 1,200mg doses of DHA or a placebo every day for eight weeks. [Those who had received the high doses did much better in mental tasks involving mathematical challenges](1). Brain scans showed that functional activity in their frontal cortex – which controls memory, attention and the ability to plan – increased significantly.
The results, and fact that many people eat too little fish to get enough DHA through their diet, meant it could help all children to improve their learning, added McNamara. "The primary benefit is to treat ADHD and depression, but it could also help people with their memory, learning and attention," he said.
--------------------------------------------------------------------------------------------------------------------
12th September 2010
New nomination by Jon Simons, who has pointed out to me that there is a word limit on Comments, which makes it difficult to post nominations there. If you have a nomination, please save as a text file and send to me (email above) and I can post it here.
Nominated article:
http://www.washingtonpost.com/wp-dyn/content/article/2010/09/09/AR2010090904116.html
Original article: http://www.sciencemag.org/cgi/content/abstract/329/5997/1358
Nominator: Jon Simons (jss30@cam.ac.uk)
Scientists can [scan brains for maturity](3), potentially [gauging child development](3)
By Rob Stein, Washington Post Staff Writer, Thursday, September 9, 2010; 6:13 PM
Scientists have developed a scan that can [measure the maturity of the brain](1), an advance that someday might be useful for [testing whether children are maturing normally](1) and for [gauging whether teenagers are grown-up enough to be treated as adults](1).
A federally funded study that involved [scanning more than 12,000 connections](1) in the brains of 238 volunteers ages 7 to 30 found that the technique appeared to accurately differentiate between the brains of adults and children and determine roughly where individuals scored in the normal trajectory of brain development.
While much more work is needed to validate and refine the test, the technique could have a host of uses, including [providing another way to make sure children's brains are developing properly](1), [in the same way doctors routinely measure other developmental milestones](1). [The scan could, for example, identify children who might be at risk for autism, schizophrenia and other problems because their brains are not maturing normally](1).
"If you are worried about a kid's development, in five minutes you could do a scan and it would spit out a measurement of their brain maturity level," said Nico Dosenbach, a pediatric neurology resident at St. Louis Children's Hospital who helped develop the technique described in Friday's issue of the journal Science. "That's sort of the future."
The technique developed by Dosenbach and his colleagues uses magnetic resonance imaging, already commonly used to measure activity in the brain by correlating increases and decreases in [blood flow](1) to various brain regions. The scans are considered safe because they do not use radiation.
In this case, the technique was called functional connectivity magnetic resonance imaging, or fcMRI, because it [measured connections](1) in the resting brains of the subjects. The researchers used a computer program to analyze how [connections in the brain changed as the mind matured](1), pinpointing 200 to produce an index of maturity. They found that [close connection weakened while distant connections strengthened as the brain matures](1), until about age 21 or 22.
Dosenbach estimated they were able to distinguish between the brain of children ages 7 to 11 and that of adults ages 25 to 30 with 90 percent accuracy. They were able to differentiate between adolescents and adults with 75 percent accuracy, Dosenbach said in an e-mail.
But Dosenbach warned that it would be premature to start using the technique to measure individual maturity levels.
"I would not endorse that," he said.