Friday, 7 August 2020

Bishopblog catalogue (updated 7 August 2020)

(Cartoon source: http://www.weblogcartoons.com/2008/11/23/ideas/)

Those of you who follow this blog may have noticed a lack of thematic coherence. I write about whatever is exercising my mind at the time, which can range from technical aspects of statistics to the design of bathroom taps. I decided it might be helpful to introduce a bit of order into this chaotic melange, so here is a catalogue of posts by topic.

Language impairment, dyslexia and related disorders
The common childhood disorders that have been left out in the cold (1 Dec 2010) What's in a name? (18 Dec 2010) Neuroprognosis in dyslexia (22 Dec 2010) Where commercial and clinical interests collide: Auditory processing disorder (6 Mar 2011) Auditory processing disorder (30 Mar 2011) Special educational needs: will they be met by the Green paper proposals? (9 Apr 2011) Is poor parenting really to blame for children's school problems? (3 Jun 2011) Early intervention: what's not to like? (1 Sep 2011) Lies, damned lies and spin (15 Oct 2011) A message to the world (31 Oct 2011) Vitamins, genes and language (13 Nov 2011) Neuroscientific interventions for dyslexia: red flags (24 Feb 2012) Phonics screening: sense and sensibility (3 Apr 2012) What Chomsky doesn't get about child language (3 Sept 2012) Data from the phonics screen (1 Oct 2012) Auditory processing disorder: schisms and skirmishes (27 Oct 2012) High-impact journals (Action video games and dyslexia: critique) (10 Mar 2013) Overhyped genetic findings: the case of dyslexia (16 Jun 2013) The arcuate fasciculus and word learning (11 Aug 2013) Changing children's brains (17 Aug 2013) Raising awareness of language learning impairments (26 Sep 2013) Good and bad news on the phonics screen (5 Oct 2013) What is educational neuroscience? (25 Jan 2014) Parent talk and child language (17 Feb 2014) My thoughts on the dyslexia debate (20 Mar 2014) Labels for unexplained language difficulties in children (23 Aug 2014) International reading comparisons: Is England really doing so poorly? (14 Sep 2014) Our early assessments of schoolchildren are misleading and damaging (4 May 2015) Opportunity cost: a new red flag for evaluating interventions (30 Aug 2015) The STEP Physical Literacy programme: have we been here before? (2 Jul 2017) Prisons, developmental language disorder, and base rates (3 Nov 2017) Reproducibility and phonics: necessary but not sufficient (27 Nov 2017) Developmental language disorder: the need for a clinically relevant definition (9 Jun 2018) Changing terminology for children's language disorders (23 Feb 2020) Developmental Language Disorder (DLD) in relation to DSM5 (29 Feb 2020)

Autism
Autism diagnosis in cultural context (16 May 2011) Are our ‘gold standard’ autism diagnostic instruments fit for purpose? (30 May 2011) How common is autism? (7 Jun 2011) Autism and hypersystematising parents (21 Jun 2011) An open letter to Baroness Susan Greenfield (4 Aug 2011) Susan Greenfield and autistic spectrum disorder: was she misrepresented? (12 Aug 2011) Psychoanalytic treatment for autism: Interviews with French analysts (23 Jan 2012) The ‘autism epidemic’ and diagnostic substitution (4 Jun 2012) How wishful thinking is damaging Peta's cause (9 June 2014) NeuroPointDX's blood test for Autism Spectrum Disorder ( 12 Jan 2019)

Developmental disorders/paediatrics
The hidden cost of neglected tropical diseases (25 Nov 2010) The National Children's Study: a view from across the pond (25 Jun 2011) The kids are all right in daycare (14 Sep 2011) Moderate drinking in pregnancy: toxic or benign? (21 Nov 2012) Changing the landscape of psychiatric research (11 May 2014) The sinister side of French psychoanalysis revealed (15 Oct 2019)

Genetics
Where does the myth of a gene for things like intelligence come from? (9 Sep 2010) Genes for optimism, dyslexia and obesity and other mythical beasts (10 Sep 2010) The X and Y of sex differences (11 May 2011) Review of How Genes Influence Behaviour (5 Jun 2011) Getting genetic effect sizes in perspective (20 Apr 2012) Moderate drinking in pregnancy: toxic or benign? (21 Nov 2012) Genes, brains and lateralisation (22 Dec 2012) Genetic variation and neuroimaging (11 Jan 2013) Have we become slower and dumber? (15 May 2013) Overhyped genetic findings: the case of dyslexia (16 Jun 2013) Incomprehensibility of much neurogenetics research ( 1 Oct 2016) A common misunderstanding of natural selection (8 Jan 2017) Sample selection in genetic studies: impact of restricted range (23 Apr 2017) Pre-registration or replication: the need for new standards in neurogenetic studies (1 Oct 2017) Review of 'Innate' by Kevin Mitchell ( 15 Apr 2019) Why eugenics is wrong (18 Feb 2020)

Neuroscience
Neuroprognosis in dyslexia (22 Dec 2010) Brain scans show that… (11 Jun 2011)  Time for neuroimaging (and PNAS) to clean up its act (5 Mar 2012) Neuronal migration in language learning impairments (2 May 2012) Sharing of MRI datasets (6 May 2012) Genetic variation and neuroimaging (1 Jan 2013) The arcuate fasciculus and word learning (11 Aug 2013) Changing children's brains (17 Aug 2013) What is educational neuroscience? ( 25 Jan 2014) Changing the landscape of psychiatric research (11 May 2014) Incomprehensibility of much neurogenetics research ( 1 Oct 2016)

Reproducibility
Accentuate the negative (26 Oct 2011) Novelty, interest and replicability (19 Jan 2012) High-impact journals: where newsworthiness trumps methodology (10 Mar 2013) Who's afraid of open data? (15 Nov 2015) Blogging as post-publication peer review (21 Mar 2013) Research fraud: More scrutiny by administrators is not the answer (17 Jun 2013) Pressures against cumulative research (9 Jan 2014) Why does so much research go unpublished? (12 Jan 2014) Replication and reputation: Whose career matters? (29 Aug 2014) Open code: not just data and publications (6 Dec 2015) Why researchers need to understand poker (26 Jan 2016) Reproducibility crisis in psychology (5 Mar 2016) Further benefit of registered reports (22 Mar 2016) Would paying by results improve reproducibility? (7 May 2016) Serendipitous findings in psychology (29 May 2016) Thoughts on the Statcheck project (3 Sep 2016) When is a replication not a replication? (16 Dec 2016) Reproducible practices are the future for early career researchers (1 May 2017) Which neuroimaging measures are useful for individual differences research? (28 May 2017) Prospecting for kryptonite: the value of null results (17 Jun 2017) Pre-registration or replication: the need for new standards in neurogenetic studies (1 Oct 2017) Citing the research literature: the distorting lens of memory (17 Oct 2017) Reproducibility and phonics: necessary but not sufficient (27 Nov 2017) Improving reproducibility: the future is with the young (9 Feb 2018) Sowing seeds of doubt: how Gilbert et al's critique of the reproducibility project has played out (27 May 2018) Preprint publication as karaoke (26 Jun 2018) Standing on the shoulders of giants, or slithering around on jellyfish: Why reviews need to be systematic (20 Jul 2018) Matlab vs open source: costs and benefits to scientists and society (20 Aug 2018) Responding to the replication crisis: reflections on Metascience 2019 (15 Sep 2019) Manipulated images: hiding in plain sight (13 May 2020) Frogs or termites: gunshot or cumulative science? (6 Jun 2020)

Statistics
Book review: biography of Richard Doll (5 Jun 2010) Book review: the Invisible Gorilla (30 Jun 2010) The difference between p < .05 and a screening test (23 Jul 2010) Three ways to improve cognitive test scores without intervention (14 Aug 2010) A short nerdy post about the use of percentiles (13 Apr 2011) The joys of inventing data (5 Oct 2011) Getting genetic effect sizes in perspective (20 Apr 2012) Causal models of developmental disorders: the perils of correlational data (24 Jun 2012) Data from the phonics screen (1 Oct 2012) Moderate drinking in pregnancy: toxic or benign? (21 Nov 2012) Flaky chocolate and the New England Journal of Medicine (13 Nov 2012) Interpreting unexpected significant results (7 June 2013) Data analysis: Ten tips I wish I'd known earlier (18 Apr 2014) Data sharing: exciting but scary (26 May 2014) Percentages, quasi-statistics and bad arguments (21 July 2014) Why I still use Excel (1 Sep 2016) Sample selection in genetic studies: impact of restricted range (23 Apr 2017) Prospecting for kryptonite: the value of null results (17 Jun 2017) Prisons, developmental language disorder, and base rates (3 Nov 2017) How Analysis of Variance Works (20 Nov 2017) ANOVA, t-tests and regression: different ways of showing the same thing (24 Nov 2017) Using simulations to understand the importance of sample size (21 Dec 2017) Using simulations to understand p-values (26 Dec 2017) One big study or two small studies? (12 Jul 2018)

Journalism/science communication
Orwellian prize for scientific misrepresentation (1 Jun 2010) Journalists and the 'scientific breakthrough' (13 Jun 2010) Science journal editors: a taxonomy (28 Sep 2010) Orwellian prize for journalistic misrepresentation: an update (29 Jan 2011) Academic publishing: why isn't psychology like physics? (26 Feb 2011) Scientific communication: the Comment option (25 May 2011)  Publishers, psychological tests and greed (30 Dec 2011) Time for academics to withdraw free labour (7 Jan 2012) 2011 Orwellian Prize for Journalistic Misrepresentation (29 Jan 2012) Time for neuroimaging (and PNAS) to clean up its act (5 Mar 2012) Communicating science in the age of the internet (13 Jul 2012) How to bury your academic writing (26 Aug 2012) High-impact journals: where newsworthiness trumps methodology (10 Mar 2013)  A short rant about numbered journal references (5 Apr 2013) Schizophrenia and child abuse in the media (26 May 2013) Why we need pre-registration (6 Jul 2013) On the need for responsible reporting of research (10 Oct 2013) A New Year's letter to academic publishers (4 Jan 2014) Journals without editors: What is going on? (1 Feb 2015) Editors behaving badly? (24 Feb 2015) Will Elsevier say sorry? (21 Mar 2015) How long does a scientific paper need to be? (20 Apr 2015) Will traditional science journals disappear? (17 May 2015) My collapse of confidence in Frontiers journals (7 Jun 2015) Publishing replication failures (11 Jul 2015) Psychology research: hopeless case or pioneering field? (28 Aug 2015) Desperate marketing from J. Neuroscience ( 18 Feb 2016) Editorial integrity: publishers on the front line ( 11 Jun 2016) When scientific communication is a one-way street (13 Dec 2016) Breaking the ice with buxom grapefruits: Pratiques de publication and predatory publishing (25 Jul 2017) Should editors edit reviewers? ( 26 Aug 2018) Corrigendum: a word you may hope never to encounter (3 Aug 2019) Percent by most prolific author score and editorial bias (12 Jul 2020)

Social Media
A gentle introduction to Twitter for the apprehensive academic (14 Jun 2011) Your Twitter Profile: The Importance of Not Being Earnest (19 Nov 2011) Will I still be tweeting in 2013? (2 Jan 2012) Blogging in the service of science (10 Mar 2012) Blogging as post-publication peer review (21 Mar 2013) The impact of blogging on reputation ( 27 Dec 2013) WeSpeechies: A meeting point on Twitter (12 Apr 2014) Email overload ( 12 Apr 2016) How to survive on Twitter - a simple rule to reduce stress (13 May 2018)

Academic life
An exciting day in the life of a scientist (24 Jun 2010) How our current reward structures have distorted and damaged science (6 Aug 2010) The challenge for science: speech by Colin Blakemore (14 Oct 2010) When ethics regulations have unethical consequences (14 Dec 2010) A day working from home (23 Dec 2010) Should we ration research grant applications? (8 Jan 2011) The one hour lecture (11 Mar 2011) The expansion of research regulators (20 Mar 2011) Should we ever fight lies with lies? (19 Jun 2011) How to survive in psychological research (13 Jul 2011) So you want to be a research assistant? (25 Aug 2011) NHS research ethics procedures: a modern-day Circumlocution Office (18 Dec 2011) The REF: a monster that sucks time and money from academic institutions (20 Mar 2012) The ultimate email auto-response (12 Apr 2012) Well, this should be easy…. (21 May 2012) Journal impact factors and REF2014 (19 Jan 2013)  An alternative to REF2014 (26 Jan 2013) Postgraduate education: time for a rethink (9 Feb 2013)  Ten things that can sink a grant proposal (19 Mar 2013)Blogging as post-publication peer review (21 Mar 2013) The academic backlog (9 May 2013)  Discussion meeting vs conference: in praise of slower science (21 Jun 2013) Why we need pre-registration (6 Jul 2013) Evaluate, evaluate, evaluate (12 Sep 2013) High time to revise the PhD thesis format (9 Oct 2013) The Matthew effect and REF2014 (15 Oct 2013) The University as big business: the case of King's College London (18 June 2014) Should vice-chancellors earn more than the prime minister? (12 July 2014)  Some thoughts on use of metrics in university research assessment (12 Oct 2014) Tuition fees must be high on the agenda before the next election (22 Oct 2014) Blaming universities for our nation's woes (24 Oct 2014) Staff satisfaction is as important as student satisfaction (13 Nov 2014) Metricophobia among academics (28 Nov 2014) Why evaluating scientists by grant income is stupid (8 Dec 2014) Dividing up the pie in relation to REF2014 (18 Dec 2014)  Shaky foundations of the TEF (7 Dec 2015) A lamentable performance by Jo Johnson (12 Dec 2015) More misrepresentation in the Green Paper (17 Dec 2015) The Green Paper’s level playing field risks becoming a morass (24 Dec 2015) NSS and teaching excellence: wrong measure, wrongly analysed (4 Jan 2016) Lack of clarity of purpose in REF and TEF ( 2 Mar 2016) Who wants the TEF? ( 24 May 2016) Cost benefit analysis of the TEF ( 17 Jul 2016)  Alternative providers and alternative medicine ( 6 Aug 2016) We know what's best for you: politicians vs. experts (17 Feb 2017) Advice for early career researchers re job applications: Work 'in preparation' (5 Mar 2017) Should research funding be allocated at random? (7 Apr 2018) Power, responsibility and role models in academia (3 May 2018) My response to the EPA's 'Strengthening Transparency in Regulatory Science' (9 May 2018) More haste less speed in calls for grant proposals ( 11 Aug 2018) Has the Society for Neuroscience lost its way? ( 24 Oct 2018) The Paper-in-a-Day Approach ( 9 Feb 2019) Benchmarking in the TEF: Something doesn't add up ( 3 Mar 2019) The Do It Yourself conference ( 26 May 2019) A call for funders to ban institutions that use grant capture targets (20 Jul 2019) Research funders need to embrace slow science (1 Jan 2020) Should I stay or should I go: When debate with opponents should be avoided (12 Jan 2020) Stemming the flood of illegal external examiners (9 Feb 2020) What can scientists do in an emergency shutdown? 
(11 Mar 2020) Stepping back a level: Stress management for academics in the pandemic (2 May 2020)
TEF in the time of pandemic (27 Jul 2020)

Celebrity scientists/quackery
Three ways to improve cognitive test scores without intervention (14 Aug 2010) What does it take to become a Fellow of the RSM? (24 Jul 2011) An open letter to Baroness Susan Greenfield (4 Aug 2011) Susan Greenfield and autistic spectrum disorder: was she misrepresented? (12 Aug 2011) How to become a celebrity scientific expert (12 Sep 2011) The kids are all right in daycare (14 Sep 2011)  The weird world of US ethics regulation (25 Nov 2011) Pioneering treatment or quackery? How to decide (4 Dec 2011) Psychoanalytic treatment for autism: Interviews with French analysts (23 Jan 2012) Neuroscientific interventions for dyslexia: red flags (24 Feb 2012) Why most scientists don't take Susan Greenfield seriously (26 Sept 2014) NeuroPointDX's blood test for Autism Spectrum Disorder ( 12 Jan 2019)

Women
Academic mobbing in cyberspace (30 May 2010) What works for women: some useful links (12 Jan 2011) The burqua ban: what's a liberal response (21 Apr 2011) C'mon sisters! Speak out! (28 Mar 2012) Psychology: where are all the men? (5 Nov 2012) Should Rennard be reinstated? (1 June 2014) How the media spun the Tim Hunt story (24 Jun 2015)

Politics and Religion
Lies, damned lies and spin (15 Oct 2011) A letter to Nick Clegg from an ex liberal democrat (11 Mar 2012) BBC's 'extensive coverage' of the NHS bill (9 Apr 2012) Schoolgirls' health put at risk by Catholic view on vaccination (30 Jun 2012) A letter to Boris Johnson (30 Nov 2013) How the government spins a crisis (floods) (1 Jan 2014) The alt-right guide to fielding conference questions (18 Feb 2017) We know what's best for you: politicians vs. experts (17 Feb 2017) Barely a good word for Donald Trump in Houses of Parliament (23 Feb 2017) Do you really want another referendum? Be careful what you wish for (12 Jan 2018) My response to the EPA's 'Strengthening Transparency in Regulatory Science' (9 May 2018) What is driving Theresa May? ( 27 Mar 2019) A day out at 10 Downing St (10 Aug 2019) Voting in the EU referendum: Ignorance, deceit and folly ( 8 Sep 2019) Harry Potter and the Beast of Brexit (20 Oct 2019) Attempting to communicate with the BBC (8 May 2020) Boris bingo: strategies for (not) answering questions (29 May 2020)

Humour and miscellaneous
Orwellian prize for scientific misrepresentation (1 Jun 2010) An exciting day in the life of a scientist (24 Jun 2010) Science journal editors: a taxonomy (28 Sep 2010) Parasites, pangolins and peer review (26 Nov 2010) A day working from home (23 Dec 2010) The one hour lecture (11 Mar 2011) The expansion of research regulators (20 Mar 2011) Scientific communication: the Comment option (25 May 2011) How to survive in psychological research (13 Jul 2011) Your Twitter Profile: The Importance of Not Being Earnest (19 Nov 2011) 2011 Orwellian Prize for Journalistic Misrepresentation (29 Jan 2012) The ultimate email auto-response (12 Apr 2012) Well, this should be easy…. (21 May 2012) The bewildering bathroom challenge (19 Jul 2012) Are Starbucks hiding their profits on the planet Vulcan? (15 Nov 2012) Forget the Tower of Hanoi (11 Apr 2013) How do you communicate with a communications company? (30 Mar 2014) Noah: A film review from 32,000 ft (28 July 2014) The rationalist spa (11 Sep 2015) Talking about tax: weasel words (19 Apr 2016) Controversial statues: remove or revise? (22 Dec 2016) The alt-right guide to fielding conference questions (18 Feb 2017) My most popular posts of 2016 (2 Jan 2017) An index of neighbourhood advantage from English postcode data (15 Sep 2018) Working memories: A brief review of Alan Baddeley's memoir (13 Oct 2018)

Monday, 27 July 2020

TEF in the time of pandemic

An article in the Times Higher today considers the fate of the Teaching Excellence Framework (TEF). I am a long-term critic of the TEF, on the grounds that it lacks an adequate rationale, has little statistical or content validity, is not cost-effective, and has the potential to mislead prospective students about the quality of teaching in higher education institutions. For a slideshow covering these points, see here. I was pleased to be quoted in the Times Higher article, alongside other senior figures in higher education, who were in broad agreement that the future of TEF now seems uncertain. Here I briefly document three of my concerns.

First, the fact that the Pearce Review has not been published is reminiscent of the Government's strategy of sitting on reports that it finds inconvenient. I think we can assume the report is not a bland endorsement of TEF, but rather that it identified some of the fundamental statistical problems with the methodology of TEF, all of which just get worse when extended down to subject-level TEF. My own view is that subject-level TEF would be unworkable. If this is what the report says, then it would be an embarrassment for the government, and a disappointment for universities that have already invested in the exercise. I'm not confident that this would stop TEF going ahead, but after so many changes of minister the government might be willing either to shelve the idea (the more sensible move) or to delay in the hope that the problems can be overcome.

Second, the whole nature of teaching has changed radically in response to the pandemic. Of course, we are all uncertain of the future, and institutions vary in terms of their predictions, but what I am hearing from the experts in pandemics is that it is wrong to imagine we are living through a blip after which we will return to normal. Some staff are adapting well to the demand for online teaching, but this is going to depend on how far teaching requires a practical element, as well as on how tech-savvy individual teaching staff are. So, if much teaching stays online, then we'd be evaluating universities on a very different teaching profile than the one assessed in TEF.

Finally, there is wide variation in how universities are responding to the impact of the pandemic on staff. Some are making staff redundant, especially those on short-term contracts, and many are in financial difficulties. Jobs are being frozen. Even in well-established universities such as my own, there are significant numbers of staff who are massively impacted by having children to care for at home. Overall, this means that the teaching being delivered is not only different in kind: actual and effective staff-to-student ratios are also likely to go down.

So my bottom line is that even if the TEF methodology worked (and it doesn't), it's not clear that the statistics used for it would be relevant in future. I get the impression that some HEIs are taking the approach that the show must go on, with regard to both REF and TEF, because they have substantial sunk costs in these exercises (though more for REF than TEF). But staff are incredibly hard-pressed in just delivering teaching and I think enthusiasm for TEF, never high, is at rock bottom right now. 

At the annual lecture of the Council for Defence of British Universities in 2018 I argued that TEF should have been strangled at birth. It has struggled on in a sickly and miserable state since 2015. It is now time to put it out of its misery.

Sunday, 12 July 2020

'Percent by most prolific' author score: a red flag for possible editorial bias

(This is an evolving story: scroll to end of post for updates)

This week has seen a strange tale unfold around the publication practices of Professor Mark Griffiths of Nottingham Trent University. Professor Griffiths is an expert in the field of behavioural addictions, including gambling and problematic internet use. He publishes prolifically, and in 2019 published 90 papers, meeting the criterion set by Ioannidis et al (2018) for a hyperprolific author.

More recently, he has published on behavioural aspects of reactions to the COVID-19 pandemic, and he is due to edit a special issue of the International Journal of Mental Health and Addiction (IJMHA) on this topic.

He came to my attention after Dr Brittany Davidson described her attempt to obtain data from a recent study published in IJMHA reporting a scale for measuring fear of COVID-19. She outlined the sequence of events on PubPeer.  Essentially Griffiths, as senior author, declined to share the data, despite there being a statement in the paper that the data would be available on request. This was unexpected, given that in a recent paper about gaming disorder research, Griffiths had written:
'Researchers should be encouraged to implement data-sharing procedures and transparency of research procedures by pre-registering their upcoming studies on established platforms such as the Open Science Framework (https://osf.io). Although this may not be entirely sufficient to tackle potential replicability issues, it will likely increase the robustness and transparency of future research.'
It is not uncommon for authors to be reluctant to share data if they have plans to do more work on a dataset, but one would expect the journal editor to take seriously a breach of a statement in the published paper. Dr Davidson reported that she did not receive a reply from Masood Zangeneh, the editor of IJMHA.

This lack of editorial response is concerning, especially given that the IJMHA is a member of the Committee on Publication Ethics (COPE) and Prof Griffiths is an Advisory Editor for the journal. When I looked further, I found that in the last five years, out of 644 articles and reviews published in the journal, 80 (12.42%) have been co-authored by Griffiths. Furthermore, he was co-author on 51 of 384 (13.28%) articles in the Journal of Behavioral Addictions (JBA). He is also on the editorial board of JBA, which is edited by Zsolt Demetrovics, who has coauthored many papers with Griffiths.

This pattern may have an entirely innocent explanation, but public confidence in the journals may be dented by such skew in authorship, given that editors have considerable power to give an easy ride to papers by their friends and collaborators. In the past, I found a high rate of publication by favoured authors in certain journals was an indication of gaming by editors, detectable by the fact that papers by favoured authors had acceptance times far too short to be compatible with peer review. Neither IJMHA nor JBA publishes the dates of submission and acceptance of articles, and so it is not possible to evaluate this concern.

We can, however, ask: how unusual is it for a single author to dominate the profile of publications in a journal? To check this out, I did an analysis as follows:

1. I first identified a set of relevant journals in this field of research, by identifying papers that cited Griffiths' work. I selected journals that featured at least 10 times on that list. There were 99 of these journals, after excluding two big generalist journals (PLOS One and Scientific Reports) and one that was not represented on Web of Science.

2. Using the R package wosr, I searched Web of Science for all articles and reviews published in each journal between 2015 and 2020.

This gave results equivalent to a manual search such as: PUBLICATION NAME: (journal of behavioral addictions) AND DOCUMENT TYPES: (Article OR Review) Timespan: 2015-2020. Indexes: SCI-EXPANDED, SSCI, A&HCI, CPCI-S, CPCI-SSH, BKCI-S, BKCI-SSH, ESCI, CCR-EXPANDED, IC.

3. Next I identified the most prolific author for each journal, defined as the author with the highest number of publications in each journal for the years 2015-2020.

4. It was then easy to compute the percentage of papers in the journal that included the most prolific author. The same information can readily be obtained by a manual search on Web of Science by selecting Analyse Results and then Authors – this generates a treemap as in Figure 1. (A code sketch of steps 2-4 follows Figure 1.)
Figure 1: Screenshot of 'Analyse Results' from Web of Science
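For readers who want to reproduce steps 2 to 4, here is a minimal R sketch using the wosr and dplyr packages. It is illustrative rather than a copy of my actual script (which is linked below): the two journal names are just examples, the advanced-search field tags (SO, PY, DT) mirror the manual search described above, and the column names ut and display_name in the data frames returned by pull_wos reflect my understanding of the package's output, so do check the structure of the returned object on your own run.

library(wosr)
library(dplyr)

sid <- auth()  # assumes WOS_USERNAME and WOS_PASSWORD are set as environment variables

journals <- c("Journal of Behavioral Addictions",
              "International Journal of Mental Health and Addiction")  # illustrative; the full analysis used 99 journals

pct_by_prolific <- function(journal, sid) {
  query <- sprintf('SO = ("%s") AND PY = (2015-2020) AND DT = (Article OR Review)', journal)
  pulled <- pull_wos(query, sid = sid)
  n_papers <- nrow(pulled$publication)       # one row per article/review, keyed by the WoS identifier 'ut'
  top <- pulled$author %>%
    distinct(ut, display_name) %>%           # one row per author per paper
    count(display_name, sort = TRUE) %>%     # number of papers per author
    slice(1)                                 # most prolific author in this journal
  data.frame(journal = journal,
             n_papers = n_papers,
             top_author = top$display_name,
             pct_by_prolific = round(100 * top$n / n_papers, 2))
}

results <- bind_rows(lapply(journals, pct_by_prolific, sid = sid))
results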

A density plot of the distribution of these 'percent by most prolific' scores is shown in Figure 2, and reveals a bimodal distribution with a small hump at the right end corresponding to journals where 8% or more of articles are contributed by a single prolific author. This hump included IJMHA and JBA.

Figure 2: Distribution of % papers by most prolific author for 99 journals

This exercise confirmed my impression that these two journals are outliers in having such a high proportion of papers contributed by one author – in this case Griffiths – as shown in Figure 3. It is noteworthy that a few journals have authors who contributed a remarkably high number of papers, but these tended to be journals with very large numbers of papers (on the right-hand side of Figure 3), and so the proportion is less striking. The table corresponding to Figure 3, and the script used to generate the summary data, are available here.

Figure 3: Scatterplot in which each point corresponds to one journal, showing the number of papers (N papers) against the percentage of papers contributed by the most prolific author in that journal
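For completeness, given a summary table like the results data frame sketched above (one row per journal, with columns n_papers and pct_by_prolific, names I have assumed), plots along the lines of Figures 2 and 3 can be generated with ggplot2 roughly as follows.

library(ggplot2)

# Density of 'percent by most prolific' scores across journals (cf. Figure 2)
ggplot(results, aes(x = pct_by_prolific)) +
  geom_density(fill = "grey80") +
  geom_vline(xintercept = 8, linetype = "dashed") +   # the 8% region discussed in the text
  labs(x = "% of papers by most prolific author", y = "Density")

# N papers against % by most prolific author, one point per journal (cf. Figure 3)
ggplot(results, aes(x = n_papers, y = pct_by_prolific)) +
  geom_point() +
  labs(x = "N articles and reviews, 2015-2020", y = "% of papers by most prolific author")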

I then repeated this same procedure for the journals involved in bad editorial practices that I featured in earlier blogposts. As shown in Table 1, this 'percent by most prolific' score was also unusually high for those journals during the period when I identified overly brief editorial decision times, but has subsequently recovered to something more normal under new editors. (Regrettably, the publishers have taken no action on the unreviewed papers in these journals, which continue to pollute the literature in this field.)

Journal | Year range | N articles | Most prolific author | % by most prolific
Research in Developmental Disabilities | 2015-2019 | 972 | Steenbergen B | 1.34
Research in Developmental Disabilities | 2010-2014 | 1665 | Sigafoos J | 3.78
Research in Developmental Disabilities | 2005-2009 | 337 | Matson JL | 9.2 *
Research in Developmental Disabilities | 2000-2004 | 173 | Matson JL | 8.09 *
Research in Autism Spectrum Disorders | 2015-2019 | 448 | Gal E | 1.34
Research in Autism Spectrum Disorders | 2010-2014 | 777 | Matson JL | 10.94 *
Research in Autism Spectrum Disorders | 2005-2009 | 182 | Matson JL | 15.93 *
J Developmental and Physical Disabilities | 2015-2019 | 279 | Bitsika V | 4.3
J Developmental and Physical Disabilities | 2010-2014 | 226 | Matson JL | 10.62 *
J Developmental and Physical Disabilities | 2005-2009 | 187 | Matson JL | 9.63 *
J Developmental and Physical Disabilities | 2000-2004 | 126 | Ryan B | 3.17
Developmental NeuroRehabilitation | 2015-2019 | 327 | Falkmer T | 3.98
Developmental NeuroRehabilitation | 2010-2014 | 252 | Matson JL | 13.89 *
Developmental NeuroRehabilitation | 2005-2009 | 73 | Haley SM | 5.48

Table 1: Analysis of 'percentage by most prolific' publications in four journals with evidence of editorial bias. Those with '% by most prolific' scores > 8 are marked with an asterisk.

Could the 'percent by most prolific' score be an indicator of editorial bias? This cannot be assumed: it could be the case that Griffiths produces an enormous amount of high quality work, and chooses to place it in one of two journals that have a relevant readership. Nevertheless, this publishing profile, with one author accounting for more than 10% of the papers in two separate journals, is unusual enough to raise a red flag that the usual peer review process might have been subverted. That flag could easily be lowered if we had information on dates of submission and acceptance of papers, or, better still, open peer review.

I will be writing to Springer, the publisher of IJMHA, and AK Journals, the publisher of JBA, to recommend that they investigate the unusual publication patterns in their journals, and to ask that in future they explicitly report dates of submission and acceptance of papers, as well as the identity of the editor who was responsible for the peer review process. A move to open peer review is a much bigger step, but one that some journals have already adopted, and it has been important in giving confidence that ethical publishing practices are followed. Such transparent practices are important not just for detecting problems, but also for ensuring that question marks do not hang unfairly over the heads of authors and editors.

**Update** 20th July 2020.
AK Journals have responded with a very prompt and detailed account of an investigation that they have conducted into the publications in their journal, which finds no evidence of any preferential treatment of papers by Prof Griffiths. See their comment below.  Note also that, contrary to my statement above, dates of receipt/decision for papers in JBA are made public: I could not find them online but they are included in the pdf version of papers published in the journal.

**Update2** 21st July 2020
Professor Griffiths has written two blogposts responding to concerns about his numerous publications in JBA and IJMHA.
In the first, he confirms that the papers in both journals were properly peer-reviewed (as AK journals have stated in their response), and in the second, he makes the case that he met criteria for authorship in all papers, citing endorsements from co-authors.   
I will post here any response I get from IJMHA. 



Saturday, 6 June 2020

Frogs or termites? Gunshot or cumulative science?


"Tell us again about Monet, Grandpa."

The tl;dr version of this post is that we're all so obsessed with doing new studies that we disregard prior literature. This is largely due to a scientific culture that gives disproportionate value to novel work. This, I argue, weakens our science.

This post has been brewing in my mind ever since I took part in a reading group about systematic reviews. We were discussing the new NIRO guidelines for systematic reviews outside the clinical trials context that are under development by Marta Topor and Jade Pickering. I'd been recommending systematic review as a useful research contribution that could be undertaken when other activities had stalled because of the pandemic. But the enthusiasm of those in the reading group seemed to wane as the session progressed. Yes, everyone agreed, the guidelines were excellent: clear and comprehensive. But it was evident that doing a proper review would not be a "quick win"; the amount of work would of course depend on the number of papers on a topic, but even for a circumscribed subject it was likely to be substantial and involve close reading of a lot of material. Was it a good use of time, people asked. I defended the importance of looking at past literature: it's concerning if we don't read scientific papers because we are all so busy writing them. To my mind, being a serious scholar means being very familiar with past work in a subject area. However, it's concerning that our reward system doesn't value that, making early-career researchers nervous about investing time in it.

The thing that prompted me to put my thoughts into words was a tweet I saw this morning by Mike Johansen (@mikejohansenmd). It seems at first to be on an unrelated topic, but I think it is another symptom of the same issue: a disregard for prior literature. Mike wrote:
Manuscripts should look like: Question: Methods: Results: Limitations: Figures/Tables: Who does these things? Things that don't matter: introduction, discussion. Who does these things?
I replied that he seemed to be recommending that we disregard the prior literature, which I think is a bad idea. I argued "One study is never enough to answer a question - important to consider how this study fits in - or if it doesn't, why."

Noah Haber (@noahhaber) jumped in at this point to say: 
I'm sympathetic (~45% convinced) to the argument that literature reviews in introductions do more harm than good. In practice, they are rarely more than cursory and uncritical, and make us beholden to ideas that have long outlived their usefulness. Space better used in methods.
But I don't think that's a good argument. I'm the first to agree that literature reviews are usually terrible: people only cite the work that confirms their position, and often do that inaccurately. You can see slides from a talk I gave on 'Why your literature review should be systematic' here. But I worry if the response to current unscholarly and biased approaches to the literature is to say that we can just disregard the literature. If you assume that the study you are doing is so important that you don't have time to read other people's studies, it is on the one hand illogical (if we all did that, who would read your studies?), on the other hand disrespectful to fellow scientists, and on the most important third hand (yes, assume a mutant for now) bad for science.

Why is it bad for science? Because science seldom advances by a single study. Solid progress is made when work is cumulative. We have far more confidence in a theory that is supported by a series of experiments than by a single study, however large the effect. Indeed, we know that studies heralding a novel result often overestimate the size of effect – the "winner's curse". So to interpret your study, I want to know how far it is consistent with prior work, and, if it isn't, whether there might be a good reason for that.
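The winner's curse is easy to demonstrate with a small simulation (this is my own illustrative R sketch rather than part of the original argument, and the parameter values are arbitrary): if many underpowered studies of a modest true effect are run, the subset that reach p < .05 will, on average, report inflated effect sizes.

# Simulate the winner's curse: among underpowered studies of a modest true effect,
# those that reach p < .05 systematically overestimate the effect size.
set.seed(42)
true_d <- 0.3   # true standardised effect (arbitrary illustrative value)
n <- 30         # per-group sample size: deliberately underpowered
nsim <- 10000

sim_one <- function() {
  x <- rnorm(n, mean = true_d)
  y <- rnorm(n, mean = 0)
  d_hat <- (mean(x) - mean(y)) / sqrt((var(x) + var(y)) / 2)  # estimated Cohen's d
  p <- t.test(x, y)$p.value
  c(d_hat = d_hat, p = p)
}

sims <- as.data.frame(t(replicate(nsim, sim_one())))
mean(sims$d_hat)                 # close to the true value of 0.3 on average
mean(sims$d_hat[sims$p < .05])   # noticeably larger: the 'significant' studies overestimate the effect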

Alas, this approach to science is discouraged by many funders and institutions: calls for research proposals are peppered with words such as "groundbreaking", "transformational", and "novel". There is a horror of doing work that is merely "cumulative". As a consequence, many researchers hop around like frogs in a lilypond, trying to land on a lilypad that is hiding buried treasure. It may sound dull, but I think we should model ourselves more on termites – we can only build an impressive edifice if we collaborate to each do our part and build on what has gone before.

Of course, the termite mound approach is a disaster if the work we try to build on is biased, poorly conducted and over-hyped. Unfortunately that is often the case, as noted by Noah. We come rather full circle here, because I think a motivation for Mike and Noah's tweets is recognition of the importance of reporting work in a way that will make it a solid foundation for a cumulative science of the future. I'm in full agreement with that. Where I disagree, though, is in how we integrate what we are doing now with what has gone before. We do need to see what we are doing as part of a cumulative, collaborative process in taking ideas forward, rather than a series of single-shot studies.

Friday, 29 May 2020

Boris Bingo: Strategies for (not) answering questions


On Wednesday 27th May, the Prime Minister, Boris Johnson, appeared before the House of Commons Liaison Committee, to answer questions about the coronavirus crisis. The Liaison Committee is made up of all the Chairs of Select Committees, which are where much of the serious business of government is done. The proceedings are available online, and contrast markedly with Hansard reports from the House of Commons, where the atmosphere is typically gladiatorial, with a lot of political point-scoring. In Select Committees, members from a mix of parties aim to work constructively together. It is customary for the Prime Minister to give evidence to the Liaison Committee three times a year, but this was Boris Johnson's first appearance.

The circumstances were extraordinary. The PM himself did not look well: perhaps not surprising when one considers that he was in intensive care with COVID-19 in April, only leaving hospital on 12th April, with a new baby born on 29th April. Since then, the UK has achieved the dubious distinction of having one of the worst rates of COVID-19 infection in the world. Then, last weekend a scandal broke around Dominic Cummings, Chief Advisor to the PM, who gave a Press Conference on Monday to explain why he had been travelling around the country with his wife and son, when both he and his wife had suspected COVID-19.

I watched the Liaison Committee live on TV and was agog. There had been fears that the Chair, Bernard Jenkin, would give the PM an easy time. He did not; he chaired impeccably, ensuring committee members stuck to time and that the PM stuck to the point. Questions were polite but challenging, regardless of the political affiliation of the committee member. Did the PM rise to the challenge? This was not the sneering, combative PM that we saw in Brexit debates – he, no doubt, could see that would not go down well with this committee. Rather, the impression he gave was of a man who was winging it and relying on his famous charm in the hope that bluster and bonhomie would win the day. Alas, they did not.

Intrigued by Johnson's strategy – if it can be called that – for answering questions, I have spent some time poring over the transcript of the proceedings, and realised in so doing that I have the material for a new Bingo game. When watching the PM answer questions, you score a point for each of the following strategies you identify. If you do the drinking game version, it may ease the angst otherwise generated by listening to the leader of our nation.

Paltering

This term refers to a common strategy among politicians: appearing to answer a question without actually doing so. It can give at least a superficial impression that the question has been answered, while deflecting to a related topic. In the following exchange, 'I have no reason to believe' is a big red flag for paltering. The Chair asked what advice the PM had sought from the Cabinet Secretary about Cummings' behaviour in relation to compliance with the code of integrity, and the PM replied:
I have no reason to believe that there is any dissent from what I said a few days ago.
Asked whether the Scottish and Welsh first ministers had any influence on the approach to lockdown (Q14):
Stephen, we all work together, and I listen very carefully to what Mark says, to what Arlene and Michelle say, to what Nicola says. Of course we think about it together.
Response to Jeremy Hunt on why there were delays in implementing testing:
As you know, Jeremy, we faced several difficulties with this virus. First, this was a totally new virus and it had some properties that everybody was quite slow to recognise across the world. For instance, it is possible to transmit coronavirus when you are pre-symptomatic—when you do not have symptoms—and I do not think people understood that to begin with.
When Hunt later asked the straightforward question "Why don't we get our test results back in 24 hours", (Q45) Johnson replied:
That is a very good question. Actually, we are reducing the time—the delay—on getting your test results back. I really pay tribute to Dido Harding and her team. The UK is now testing more people than any other country in Europe. She has got a staff now of 40,000 people, with 7,500 clinicians and 25,000 trackers in all, and they are rapidly trying to accelerate the turnaround time.
When asked by Caroline Nokes about the specific impact of phased school opening on women's ability to get back to work (Q73), Johnson answered a completely different question:
I think your question, Caroline, is directed at whether or not we have sufficient female representation at the top of Government helping us to inform these decisions, and I really think we have

Vagueness
This could take the form of bland agreement with the questioner, but without any clear commitment to action. Greg Clark asked (Q27) why we have a policy of 2 metres for social distancing when the WHO recommends 1 metre. The response was:
...You are making a very important point, one that I have made myself several times—many times—in the course of the debates that we have had.
Pressed further on whether he had asked SAGE whether the 2-metre rule could be revised (Q32), he replied:
I can not only make that commitment—I can tell you that I have already done just that, so I hope we will make progress.
Asked about firms who put their employees on furlough and then threatened them with redundancy (Q99), Johnson agreed this was a Very Bad Thing, but did not actually undertake to do anything about it.
...You are raising a very important point, Huw. This country is nothing without its workforce—its labour. We have to look after people properly, and I am well aware of some of the issues that are starting to arise. People should not be using furlough cynically to keep people on their books and then get rid of them. We want people back in jobs. We want this country back on its feet. That is the whole point of the furlough scheme.
Asked about how the Cabinet were consulted about the unprecedented Press Conference by Cummings (Q9), the PM was remarkably vague, replying:
...I thought that it would be a very good thing if people could understand what I had understood myself previously, I think on the previous day, about what took place—and there you go. We had a long go at it.
Asked to be specific about advice to parents who are in the same situation as Dominic Cummings re childcare (Q21):
...The clear advice is to stay at home unless you absolutely have to go to work to do your job. If you have exceptional problems with childcare, that may cause you to vary your arrangements; that is clear.
The use of the word 'clear' in the PM's responses is often a flag for vagueness.

A direct question by Greg Clark on whether contact tracing was compulsory or advisory (Q34) led to a confused answer:
We intend to make it absolutely clear to people that they must stay at home, but let me be clear—
When the questioner followed up to ask whether it was law or advice, he continued:
We will be asking people to stay at home. If they do not follow that advice, we will consider what sanctions may be necessary—financial sanctions, fines or whatever.
It is not always easy to distinguish vagueness from paltering. The PM has a tendency to agree that something is a Very Good Thing, and to speak in glowing and over-general terms about initiatives and his desire to implement them, without any clear commitment to do more than 'looking at' them. Here he is responding to Robert Halfon on whether there will be additional resources for children whose education has been adversely affected by the shutdown (Q63):
The short answer is that I want to support any measures we can take to level up. You know what we want to do in this Government. There is no doubt that huge social injustice is taking place at the moment because some kids are going to have better access to tutoring and to schooling at home, and other kids are not going to get nearly as much, and that is not fair.
and again, when Halfon asked about apprenticeships (Q64):
All I will say to you, Rob, is that I totally agree that apprenticeships can play a huge part in getting people back on to the jobs market and into work, and we will look at anything to help people.
Halfon pressed on, asking for an apprenticeship guarantee, but the PM descended further into vagueness.
We will be doing absolutely everything we can to get people into jobs, and I will look at the idea of an apprenticeship guarantee. I suppose it is something that we would have to work with employers to deliver.
Other examples came from answers to Darren Jones, who asked about financial support to different sectors, and payments after the furlough scheme ended; e.g. the response to Q89:
We are going to do everything we can, Darren, to get everybody back into work.

Deferral

This was the first strategy to appear, in response to a question by the Chair (Q2) about when the committee might expect to see him again. Johnson made it clear he wasn't going to commit to anything:
You are very kind to want to see me again more frequently, even before we have completed this session, but can I possibly get back to you on that? Obviously, there is a lot on at the moment.
Stephen Timms asked about people who were destitute because, despite having leave to remain, they had no recourse to public funds when they suddenly lost their jobs (Q68). The PM responded:
I am going to have to come back to you on that, Stephen.
It is perhaps unfair to count this one as a deliberate strategy: Johnson seemed genuinely baffled as to how 100,000 children could be living in destitution in a civilised country.

When asked by Mel Stride about whether there would be significant increases in the overall tax burden, the PM replied:
I understand exactly where you are going with your question, Mel, but I think you are going to have to wait, if you can, until the Chancellor, Rishi Sunak, brings forward his various proposals.

Refusal to answer

Refusals were mostly polite. An illustration appeared early in the proceedings: asked by the Chair about Dominic Cummings (Q6), the PM replied:
I do think that is a reasonable question to ask, but as I say, we have a huge amount of exegesis and discussion of what happened in the life of my adviser between 27 March and 14 April. Quite frankly, I am not certain, right now, that an inquiry into that matter is a very good use of official time. We are working flat out on coronavirus.
So the question is accepted as reasonable, but we are asked to understand that it is not high priority for a PM in these challenging times.

Asked by Meg Hillier whether the Cabinet Secretary should see evidence provided by Cummings, the PM responded that this would be inappropriate – again arguing that it would be a distraction from higher priorities:
I think, actually, I would not be doing my job if I were now to shuffle this problem into the hands of officials, who are—believe me, Meg—working flat out to deal with coronavirus, as the public would want.
At times, when paltering had been detected, and a follow-up question put him on the spot, Johnson simply dug his heels in, often claiming to have already answered the question. Asked whether the Cabinet Secretary has interviewed Cummings (Q8), Johnson replied:
I am not going to go into the discussions that have taken place, but I have no reason to depart from what I have already said.
And asked whether he'd seen evidence to prove that allegations about Cummings were false (Q17), the PM again replied:
I don’t want to go into much more than I have said—
When Jeremy Hunt asked when a 24-hour test turnaround time would be met (Q48), the rather remarkable reply was:
I am not going to give you a deadline right now, Jeremy, because I have been forbidden from announcing any more targets and deadlines.

Challenge questioner

This strategy involved dismissing unwelcome questions as having false premises and/or being politically motivated. Pete Wishart (SNP) asked if Cummings' behaviour would make people less likely to obey lockdown rules (Q10). Johnson did not engage with the question, denied any wrongdoing by Cummings and added:
Notwithstanding the various party political points that you may seek to make and your point about the message, I respectfully disagree.
Similar phrases are seen in response to Yvette Cooper (Q24), who was accused of political point-scoring, and then blamed for confusing the British public (see also Churchillian gambit, below):
I think that this conversation, to my mind, has illuminated why it is so important for us to move on, and be very clear with the British public about how we want to deal with that, and how we want to make progress. And, frankly, when they hear nothing but politicians squabbling and bickering, it is no wonder that they feel confused and bewildered.
And in response to a similar point from Simon Hoare (Q25)
...what they [the people] want now is for us to focus on them and their needs, rather than on a political ding-dong about what one adviser may or may not have done

False claim

This doesn't always involve lying; it can be unclear whether or not the PM knows what is actually the case. But there was at least one instance in his evidence where what he said has been widely reported as untrue. Intriguingly, this was not an answer to a direct question, but rather an additional detail offered when he was asked about testing in care homes by Jeremy Hunt (Q44):
Do not forget that, as Chris Hopson of NHS Providers has said, every discharge from the NHS into care homes was made by clinicians, and in no case was that done when people were suspected of being coronavirus victims. Actually, the number of discharges from the NHS into care homes went down by 40% from January to March, so it is just not true that there was some concerted effort to move people out of NHS beds into care homes. That is just not right.
A report by ITV News asserted that, contrary to this claim, places in care homes were block-booked for discharged NHS patients.

The Churchillian gambit

When allowed to divert from answering questions, the PM would attempt the kind of rhetoric that had been so successful in Brexit debates, referring to what 'the people' wanted, and to government attempts to 'defeat the virus'.
For instance, this extended response to Q9 re Dominic Cummings:
What we need to do really is move on and get on to how we are going to sort out coronavirus, which is really the overwhelming priority of the people of this country
After a lengthy inquisition by Yvette Cooper, culminating in a direct question about whether he put Dominic Cummings above the national interest (Q24), we again had the appeal to what the British public want.
I think my choice is the choice that the British people want us all to make, Yvette, and that is, as far as we possibly can, to lay aside party political point-scoring, and to put the national interest first, and to be very clear with the British public about what we want to do and how we want to take this country forward.
Overall, there were four mentions of 'getting the country back on its feet', including this statement, appended to his answer to a question on whether sanctions would be needed to ensure compliance with contact tracing (Q58):
Obviously, we are relying very much on the common sense of the public to recognise the extreme seriousness of this. This is our way out. This is our way of defeating the virus and getting our country back on its feet, and I think people will want to work together-
And in response to a further request for clarification about Dominic Cummings from Darren Jones (Q94)
It is my strong belief that what the country wants is for us to be focusing on how to go forward on the test and trace scheme that we are announcing today, and on how we are going to protect their jobs and livelihoods, and defeat this virus.
In all these exchanges, the 'British people' are depicted as decent, long-suffering people, who are having a bad time, and may be anxious or confused. During Brexit debates, this might have worked, but the problem is that now a large percentage of people of all political stripes are just plain angry, and telling them that they want to 'move on' just makes them angrier.

Ironic politeness

The final characteristic has less to do with the content of answers than with their style. British political discourse is a goldmine for researchers in pragmatics – the study of how language is used. Attacking your opponent in obsequiously polite language has perhaps arisen in response to historical prohibitions on uncivil discourse in the House of Commons. Boris Johnson is a master of this art, which can be used to put down an opponent while getting a laugh from the audience. He had to be careful with the Liaison Committee, but his comments that they were 'kind to want to see me' and that he was 'delighted to be here today' were transparently insincere, and presumably designed to amuse the audience while establishing his dominance as someone who could choose whether to attend or not.

The final exchange between the Chair and the PM was priceless. The PM reiterated his enjoyment of his session with the committee but refused to undertake to return, because he was 'working flat out to defeat coronavirus and get our country back on its feet'. The Chair replied:
I should just point out that the questions on which you hesitated and decided to go away and think were some of the most positive answers you gave, in some respects. That is where we want to help. I hope you will come back soon.
I read that to mean, on the one hand, most answers were useless, but on the other hand, where the PM had pleaded for deferral, he would be held to account, and required to provide responses to the Committee in future. We shall see if that happens.