Sunday 6 December 2020

Faux peer-reviewed journals: a threat to research integrity

 

Despite all its imperfections, peer review is one marker of scientific quality – it indicates that an article has been evaluated prior to publication by at least one, and usually several, experts in the field. An academic journal that does not use peer review would not usually be regarded as a serious source and we would not expect to see it listed in a database such as Clarivate Analytics' Web of Science Core Collection, which "includes only journals that demonstrate high levels of editorial rigor and best practice". Scientists are often evaluated by their citations in Web of Science, with the assumption that this will include only peer-reviewed articles. This makes gaming of citations harder than is the case for less selective databases such as Google Scholar. The selective criteria for inclusion, and claims by Clarivate Analytics to take research integrity very seriously, are presumably the main reasons why academic institutions are willing to pay for access to Web of Science, rather than relying on Google Scholar.

Nevertheless, some journals listed in Web of Science include significant numbers of documents that are not peer-reviewed. I first became aware of this when investigating the publishing profiles of authors with remarkably high rates of publications in a small number of journals. I found that Mark Griffiths, a hyperprolific author who has been interviewed about his astounding rate of publication by the Times Higher Education, has a junior collaborator, Mohammed Mamun, who clearly sees Griffiths as a role model and is starting to rival him in publication rate. Griffiths is a co-author on 31 of 33 publications authored by Mamun since 2019. While still an undergraduate, Mamun has become the self-styled Director of the Undergraduate Research Organization in Dhaka, subsequently renamed as the Centre for Health Innovation, Networking, Training, Action and Research – Bangladesh. These organisations do not appear to have any formal link with an academic institution, though on ORCID, Mamun lists an ongoing educational affiliation to Jahangirnagar University. His H-index from Web of Science is 11. This drops if one excludes self-citations, which constitute around half of his citations, but nevertheless, this is remarkable for an undergraduate.

Of the 31 papers listed on Web of Science as coauthored by Mamun and Griffiths, 19 are categorised as letters to the Editor. Letters are a rather odd and heterogeneous category. In most journals they will be brief comments on papers published in the journal, or responses to such comments, and in such cases it is not unusual for the editor to make a decision to publish or not without seeking peer review. However, the letters coauthored by Griffiths and Mamun go beyond this kind of content, and include some brief reports of novel data, as well as case reports on suicide or homicide gleaned from press reports*. I took a closer look at three journals where these letters appeared, to try to understand how such material fitted in with their publication criteria.

The International Journal of Mental Health and Addiction (Springer) featured in an earlier blogpost, on the basis of publishing a remarkably high number of articles authored by Griffiths. In that analysis I did not include letters. The journal gives no guidelines about the format or content of letters, and has published only 16 of them since 2019, but 10 of these are authored by Griffiths, mostly with Mamun. As noted in my prior blogpost, the journal provides no information about dates of submission and acceptance, so one cannot tell whether letters were peer-reviewed. The publisher told me last July and confirmed again in September that they are investigating the issues I raised in my last blogpost, but there has to date been no public report on the outcome.  

Psychiatry Research, published by Elsevier, is explicit that Case Reports can be included as letters, and provides formatting information (750-1000 words or less, up to 5 references, no tables or figures). Before 2019, around 2-4% of publications in the journal were letters, but this jumped to an astounding 22.4% in 2020, perhaps reflecting what has been termed 'covidization' of research.

The Asian Journal of Psychiatry (AsJP), also published by Elsevier, provides formatting information only (600-800 words, 10 references, 1 figure or table), and does not specify what would constitute the content of letters, but it publishes an extremely large number of them, as shown in Figure 1. This trend started before 2020, so cannot be attributed solely to COVID-mania. 

 

Figure 1: Categorization of documents published in Asian Journal of Psychiatry between 2015 and 2020.

For most journals, letters constitute a negligible proportion of their output, and so it is unlikely to have much impact whether or not they are peer reviewed. However, for a journal like AsJP, where letters outnumber regular articles, the peer review status of letters becomes of interest.  

The only information one can obtain on this point is indirect, by scanning the submission and acceptance dates of articles to see if the lag between these two is so short as to suggest there was no peer review. Relevant data for all letters published in AsJP and Psychiatry Research are shown in Table 1. It is apparent that there is a wide range of publication lags, some extending for weeks or months, but that lags of 7 days or less are not uncommon. There is no indication that Mamun/Griffiths receive favourable treatment, but they benefit from the same rapid acceptance as other letters, with a 40-50% chance of acceptance within two weeks.
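For anyone who wants to run this kind of check themselves, here is a minimal sketch of the lag calculation in R. It is not the script I used (that is linked in the Addendum below): it assumes a spreadsheet with one row per letter and columns named 'received' and 'accepted' holding dates, and the lag bands shown are purely illustrative.

# Minimal sketch of the publication-lag calculation (illustrative only;
# the file name, column names and lag bands are assumptions, not the
# format of the files on GitHub)
letters <- read.csv("letters.csv", stringsAsFactors = FALSE)

# Lag in days between submission and acceptance
lag_days <- as.numeric(as.Date(letters$accepted) - as.Date(letters$received))

# Percentage of letters falling in each lag band
bands <- cut(lag_days,
             breaks = c(-Inf, 7, 14, 28, Inf),
             labels = c("0-7 days", "8-14 days", "15-28 days", ">28 days"))
round(100 * prop.table(table(bands)), 1)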

 

Table 1: Proportions of Letters categorized by publication lag in two journals publishing a large number of Letters. 

Thus an investigation into unusual publishing patterns by one pair of authors has drawn attention to at least two journals that appear to provide an easy way to accumulate a large number of publications that are not peer-reviewed but are nevertheless cited in Web of Science. If one adds a high rate of self-citation to the mix, and a willingness of editors to accept recycled newspaper reports as case studies, one starts to understand how it is possible for an inexperienced undergraduate to build up a substantial number of citations. 

I have drawn this situation to the attention of the integrity officers at Springer and Elsevier, but my previous experience with the Matson publication ring does not give me confidence that publishers will take any steps to monitor or control such editorial practices.

Clarivate Analytics recently published a report on Research Integrity, in which they note how "all manner of interventions and manipulations are nowadays directed to the goal of attaining scores and a patina of prestige, for individual or journal, although it may be a thin coat hiding a base metal." They describe various methods used by researchers to game citations, but do not include the use of non-peer-reviewed categories of paper to gain numerous citations. They describe technological solutions to improve integrity, but I would argue they need to add to their armamentarium a focus on the peer review process. The lag between submission and acceptance is a far from perfect indicator but it can serve as a rough proxy for the likelihood that peer review was undertaken. Requiring journals to make this information available, and to include it in the record of Web of Science, would go some way to identifying at least one form of gaming.

No doubt Prof Griffiths regards himself as a benign influence, helping an enthusiastic and energetic young person from a low-income country establish himself. He has an impressive number of collaborators from all over the world, many of whom have written in his support. Collaboration between those from very different cultures is generally to be welcomed. And I must stress that I have no objection to someone young and inexperienced making a scientific contribution - that is entirely in line with Mertonian norms of universalism. It is the departure from the norm of disinterestedness that concerns me. An obsession with 'publish or perish' leads to gaming of publications and citation counts as a way to get ahead. Research is treated like a game where the focus becomes one's own success rather than the advancement of science. This is a consequence of our distorted incentive structures and it has a damaging effect on scientific quality. It is frankly depressing to see such attitudes being inculcated in junior researchers from around the world.

 *Addendum

Letters co-authored by Mamun and Griffiths based on newspaper reports (from Web of Science). Self-cites refers to number of citations to articles by Griffiths and/or Mamun. Publication lags in square brackets calculated from date of submission and date of acceptance taken from the journal website. The script used to extract these dates, and .csv files with journal records, are available on: https://github.com/oscci/miscellaneous, see Faux Peer Review.rmd 

Mamun, MA; Griffiths, MD (2020) A rare case of Bangladeshi student suicide by gunshot due to unusual multiple causalities Asian Journal of Psychiatry, 49 10.1016/j.ajp.2020.101951 [5 self-cites, lag 6 days] 

Mamun, MA; Misti, JM; Griffiths, MD (2020) Suicide of Bangladeshi medical students: Risk factor trends based on Bangladeshi press reports Asian Journal of Psychiatry, 48 10.1016/j.ajp.2019.101905 [2 self-cites, lag 91 days] 

Mamun, MA; Chandrima, RM; Griffiths, MD (2020) Mother and Son Suicide Pact Due to COVID-19-Related Online Learning Issues in Bangladesh: An Unusual Case Report International Journal of Mental Health and Addiction, 10.1007/s11469-020-00362-5 [14 self-cites]

Bhuiyan, AKMI; Sakib, N; Pakpour, AH; Griffiths, MD; Mamun, MA (2020) COVID-19-Related Suicides in Bangladesh Due to Lockdown and Economic Factors: Case Study Evidence from Media Reports International Journal of Mental Health and Addiction, 10.1007/s11469-020-00307-y [10 self-cites]

Mamun, MA; Griffiths, MD (2020) Young Teenage Suicides in Bangladesh-Are Mandatory Junior School Certificate Exams to Blame? International Journal of Mental Health and Addiction, 10.1007/s11469-020-00275-3 [11 self-cites]

Mamun, MA; Siddique, A; Sikder, MT; Griffiths, MD (2020) Student Suicide Risk and Gender: A Retrospective Study from Bangladeshi Press Reports International Journal of Mental Health and Addiction, 10.1007/s11469-020-00267-3 [6 self-cites]

Griffiths, MD; Mamun, MA (2020) COVID-19 suicidal behavior among couples and suicide pacts: Case study evidence from press reports Psychiatry Research, 289 10.1016/j.psychres.2020.113105 [10 self-cites, lag 161 days]

Mamun, MA; Griffiths, MD (2020) PTSD-related suicide six years after the Rana Plaza collapse in Bangladesh Psychiatry Research, 287 10.1016/j.psychres.2019.112645 [0 self-cites, lag 2 days]

 

Sunday 16 August 2020

PEPIOPs – prolific editors who publish in their own publications

I recently reported on a method for identifying authors who contribute an unusually high proportion of papers to a specific journal. As I noted in my previous blogpost, one cannot assume any wrongdoing just on the grounds of a high publication rate, but when there is a close link between the author in question and the journal's editor, then this raises concerns about preferential treatment and integrity of the peer review process.

In running my analyses, I also found some cases where there wasn't just a close link between the most prolific author in a journal and the editor: they were one and the same person! At around the time I was unearthing these results, Elisabeth Bik tweeted to ask if anyone had examples of editors publishing in their own journals. I realised the analysis scripts I had developed for the 'percent by most prolific' analysis could be readily adapted to look at this question, and so I analysed journals in the broad domain of psychology and behavioural science from six publishers: Springer Nature, Wiley, Taylor and Francis, Sage, Elsevier and American Psychological Association (APA). I focused on those responsible for editorial decisions – typically termed Editor-in-Chief or Associate Editor. Sometimes this was hard to judge: in general, I included 'Deputy editors' with 'Editors-in-Chief' if they were very few in number. I ignored members of editorial boards.
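To give a flavour of how the adaptation works, here is a minimal sketch in R. It is not the actual script (which is linked further down): it assumes a data frame with one row per paper-author pair, with columns 'ut' (the Web of Science paper identifier) and 'author' (the display name), and the editor name in the usage example is made up.

# Minimal sketch (illustrative only): count how many papers in a journal
# include a named editor among the authors over a given period.
# 'records' is assumed to have one row per (paper, author) pair,
# with columns 'ut' (unique paper ID) and 'author' (display name).
editor_paper_count <- function(records, editor_name) {
  hits <- records$author == editor_name      # exact match on display name
  length(unique(records$ut[hits]))           # number of distinct papers
}

# Hypothetical usage: flag a potential PEPIOP at 15+ papers in 2015-2019
# editor_paper_count(journal_records, "Smith, Jane") >= 15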

Before reporting the findings, I thought I should consult more widely about whether people think it is appropriate for an editor to publish in their own journal.

As a starting point, I looked at guidelines that the Committee on Publication Ethics (COPE) has provided for editors:

Can editors publish in their own journal? 
While you should not be denied the ability to publish in your own journal, you must take extra precautions not to exploit your position or to create an impression of impropriety. Your journal must have a procedure for handling submissions from editors or members of the editorial board that will ensure that the peer review is handled independently of the author/editor. We also recommend that you describe the process in a commentary or similar note once the paper is published.

They link to a case report that notes how this issue can be particularly problematic if the journal is in a narrow field with few publication outlets, and the author is likely to be identifiable even if blinding is adopted.

I thought that it would be interesting to see what the broader academic community thought about this. In a Twitter poll I asked specifically about Editors-in-Chief:

The modal response was that it was OK for an Editor-in-Chief to publish in their own journal at a modest rate (once a year or less). There was general disapproval of editors publishing prolifically in their own journals – even though I emphasised that I was referring to situations where the editorial process was kept independent of the author, as recommended by COPE.

My poll question was not perfectly phrased - I did not specify the type of article: it is usual, for instance, for an editor to write editorials! But I later clarified that I meant regular articles and reviews, and I think most people interpreted it in that way.

The poll provoked some useful debate among those who approved and disapproved of editors publishing in their own journals.

Let's start with some of the reasons against this practice. I should lay my cards on the table and state that personally I am in agreement with Antonia Hamilton, who tweeted to say that as Editor in Chief of the Quarterly Journal of Experimental Psychology she would not be submitting papers from her lab to that journal during her term of office, citing concerns about Conflict of Interest. When I was Editor-in-Chief of Journal of Child Psychology and Psychiatry, I felt the same: I was concerned that my editorial colleagues would be put in a difficult position if they had to handle one of my papers, and if my papers were accepted, then it might be seen as involving a biased decision, even if that was not the case. Chris Chambers, who was among the 34% who thought an Editor-in-Chief should never publish in their own journal, expressed concerns about breaches of confidentiality, given that the Editor-in-Chief would be able to access identities of reviewers. Others, though, questioned whether that was the case at all journals.

Turning to those who thought it was acceptable for an Editor-in-Chief to publish in their own journal, several people argued that it would be unfair, not just on the editor themselves, but also on members of their research group, if presence of an editorial co-author meant they could not submit to the journal. How much of an issue this is will depend, of course, on how big your research group is, and what other publication outlets are available.  Several people felt there was a big difference between cases where the editor was lead author, and those where they played a more minor role. The extreme case is when an editor is a member of a large research consortium led by someone else. It would seem unduly restrictive to debar a paper by a consortium of 100 researchers just because one of them was the Editor in Chief of the journal. It is worth remembering too that, while being a journal editor is a prestigious role, it is hard work, and publishers may be concerned that it becomes a seriously unattractive option if a major outlet for papers is suddenly off-limits.

Be this as it may, the impression from the Twitter poll was that three papers or more per annum starts to look excessive. In my analysis, shown here, I found several journals where the Editor-in-Chief had coauthored 15 or more articles (excluding editorials/commentaries) in their own journal between 2015 and 2019. I refer to these as PEPIOPs (prolific editors who publish in their own publications). For transparency, I've made my methods and results available, so that others can extend the analysis if they wish to do so: see https://github.com/oscci/Bibliometric_analyses. More legible versions of the tables are also available.

Table 1 shows the number of journals with a PEPIOP, according to publisher. 
Table 1: Number of journals with prolific authors. Columns AE and EIC indicate cases where an Associate Editor or Editor-in-Chief published 15+ papers in the journal between 2015 and 2019
The table makes it clear that it is relatively rare for an Editor-in-Chief to be a PEPIOP: there were no cases for APA journals, and the highest rate was 5.6%, for Elsevier journals. Note that there are big differences between publishers in terms of how common it is for any author to publish 15+ papers in the journal over 5 years: this was true for around 25% of the Elsevier and Springer journals, and much less common for Sage, APA and Taylor & Francis. This probably reflects the subject matter of the journals - prolific publication, often on multi-authored papers, is more common in biosciences than social sciences.

Individual PEPIOPs are shown in Table 2. An asterisk denotes an Editor-in-Chief who authored more articles in the journal than any other person between 2015 and 2019.

These numbers should be treated with caution – I did not check whether any of these editors had only recently adopted the editorial role – largely because this information is not easy to find from journal websites*. I looked at a small sample of papers from individual PEPIOPs to see if there was anything I had overlooked, but I haven't checked every article, as there are a great many of them. There was one case where this revealed misclassification: the editor of Journal of Paediatrics and Child Health wrote regular short commentaries of around 800 words summarising other papers. This seems an entirely unremarkable thing for an editor to do, but they were classified by Web of Science as 'articles', which led to him being categorised as a PEPIOP. This illustrates how bibliometrics can be misleading.
Table 2: Editors-in-chief who published 15+ papers in own journal 2015-2019
In general I did not find any ready explanation for highly prolific outputs. Where I did spot checks, I found only one case where there was an explanatory note of the kind COPE recommends (in the journal Maturitas); on the contrary, in many cases there was a statement confirming no conflict of interest for any of the authors. Sometimes there were lists of conflicts relating to commercial aspects, but it was clear that authors did not regard their editorial role as posing a conflict. It is also worth mentioning that there were cases where an Editor-in-Chief was senior author.

The more I have pondered this, the more I realise that the reason why I am concerned particularly by Editor-in-Chief PEPIOPs is because this is the person who is ultimately responsible for integrity of the publication process. Although publishers increasingly are alert to issues of research integrity, journal websites typically advise readers to contact the Editor-in-Chief if there is a problem. Complaints about editorial decisions, demands for retractions, or concerns about potential fraud or malpractice all come to the Editor-in-Chief. That's fine provided the Editor-in-Chief is a pillar of respectability. The problem is that not all editors are the paragons that we'd like them to be: one can adopt rather a jaundiced view after encountering editors who don't even bother to reply to expressions of concern, or adopt a secretive or dismissive attitude if plausible problems in their journal are flagged up. And there are also cases on record where an editor has abused the power of their position to enhance their own publication record. For this reason, I would strongly advise any Editor-in-Chief not to be a PEPIOP; it just looks bad, even if a robust, independent editorial process has been followed. We need to have confidence and trust in those who act as gatekeepers to journals.

My principal suggestion is that publishers could improve their record for integrity if they went beyond just endorsing COPE guidelines and started to actively check whether those guidelines are adhered to. 

*Note: 21st Aug 2020: A commenter on this blogpost has noted that Helai Huang fits this category - this editor has only been in post since 1 Jan 2020, so the period of prolific publication predated being an editor-in-chief.

Monday 27 July 2020

TEF in the time of pandemic

An article in the Times Higher today considers the fate of the Teaching Excellence Framework (TEF).  I am a long-term critic of the TEF, on the grounds that it lacks an adequate rationale,  has little statistical or content validity, is not cost-effective, and has the potential to mislead potential students about the quality of teaching in higher education institutions. For a slideshow covering these points, see here. I was pleased to be quoted in the Times Higher article, alongside other senior figures in higher education, who were in broad agreement that the future of TEF now seems uncertain. Here I briefly document three of my concerns.

First, the fact that the Pearce Review has not been published is reminiscent of the Government's strategy of sitting on reports that it finds inconvenient. I think we can assume the report is not a bland endorsement of TEF, but rather that it did identify some of the fundamental statistical problems with the methodology of TEF, all of which just get worse when extended down to subject-level TEF. My own view is that subject-level TEF would be unworkable. If this is what the report says, then it would be an embarrassment for government, and a disappointment for universities who have already invested in the exercise. I'm not confident that this would stop TEF going ahead, but this may be a case where, after so many changes of minister, the government would be willing either to shelve the idea (the more sensible move) or just to delay in the hope that the problems can be overcome.

Second, the whole nature of teaching has changed radically in response to the pandemic. Of course, we are all uncertain of the future, and institutions vary in terms of their predictions, but what I am hearing from the experts in pandemics is that it is wrong to imagine we are living through a blip after which we will return to normal. Some staff are adapting well to the demand for online teaching, but this is going to depend on how far teaching requires a practical element, as well as on how tech-savvy individual teaching staff are. So, if much teaching stays online, then we'd be evaluating universities on a very different teaching profile than the one assessed in TEF.

Finally, there is wide variation in how universities are responding to the impact of the pandemic on staff. Some are making staff redundant, especially those on short-term contracts, and many are in financial difficulties. Jobs are being frozen. Even in well-established universities such as my own, there are significant numbers of staff who are massively impacted by having children to care for at home. Overall, this means that not only is the teaching that is delivered different in kind, but actual and effective staff/student ratios are also likely to go down.

So my bottom line is that even if the TEF methodology worked (and it doesn't), it's not clear that the statistics used for it would be relevant in future. I get the impression that some HEIs are taking the approach that the show must go on, with regard to both REF and TEF, because they have substantial sunk costs in these exercises (though more for REF than TEF). But staff are incredibly hard-pressed in just delivering teaching and I think enthusiasm for TEF, never high, is at rock bottom right now. 

At the annual lecture of the Council for Defence of British Universities in 2018 I argued that TEF should have been strangled at birth. It has struggled on in a sickly and miserable state since 2015. It is now time to put it out of its misery.

Sunday 12 July 2020

'Percent by most prolific' author score: a red flag for possible editorial bias

(This is an evolving story: scroll to end of post for updates; most recent update 19th Sept 2020)

This week has seen a strange tale unfold around the publication practices of Professor Mark Griffiths of Nottingham Trent University. Professor Griffiths is an expert in the field of behavioural addictions, including gambling and problematic internet use. He publishes prolifically, and in 2019 published 90 papers, meeting the criterion set by Ioannidis et al (2018) for a hyperprolific author.

More recently, he has published on behavioural aspects of reactions to the COVID-19 pandemic, and he is due to edit a special issue of the International Journal of Mental Health and Addiction (IJMHA) on this topic.

He came to my attention after Dr Brittany Davidson described her attempt to obtain data from a recent study published in IJMHA reporting a scale for measuring fear of COVID-19. She outlined the sequence of events on PubPeer.  Essentially Griffiths, as senior author, declined to share the data, despite there being a statement in the paper that the data would be available on request. This was unexpected, given that in a recent paper about gaming disorder research, Griffiths had written:
'Researchers should be encouraged to implement data-sharing procedures and transparency of research procedures by pre-registering their upcoming studies on established platforms such as the Open Science Framework (https://osf.io). Although this may not be entirely sufficient to tackle potential replicability issues, it will likely increase the robustness and transparency of future research.'
It is not uncommon for authors to be reluctant to share data if they have plans to do more work on a dataset, but one would expect the journal editor to take seriously a breach of a statement in the published paper. Dr Davidson reported that she did not receive a reply from Masood Zangeneh, the editor of IJMHA.

This lack of editorial response is concerning, especially given that the IJMHA is a member of the Committee on Publication Ethics (COPE) and Prof Griffiths is an Advisory Editor for the journal. When I looked further, I found that in the last five years, out of 644 articles and reviews published in the journal, 80 (12.42%) have been co-authored by Griffiths. Furthermore, he was co-author on 51 of 384 (13.28%) articles in the Journal of Behavioral Addictions (JBA). He is also on the editorial board of JBA, which is edited by Zsolt Demetrovics, who has coauthored many papers with Griffiths.

This pattern may have an entirely innocent explanation, but public confidence in the journals may be dented by such skew in authorship, given that editors have considerable power to give an easy ride to papers by their friends and collaborators. In the past, I found a high rate of publication by favoured authors in certain journals was an indication of gaming by editors, detectable by the fact that papers by favoured authors had acceptance times far too short to be compatible with peer review. Neither IJMHA nor JBA publishes the dates of submission and acceptance of articles, and so it is not possible to evaluate this concern.

We can however ask, how unusual is it for a single author to dominate the profile of publications in a journal? To check this out, I did an analysis as follows:

1. I first identified a set of relevant journals in this field of research, by identifying papers that cited Griffiths' work. I selected journals that featured at least 10 times on that list. There were 99 of these journals, after excluding two big generalist journals (PLOS One and Scientific Reports) and one that was not represented on Web of Science.

2. Using the R package wosr, I searched Web of Science for all articles and reviews published in each journal between 2015 and 2020.

This gave results equivalent to a manual search such as: PUBLICATION NAME: (journal of behavioral addictions) AND DOCUMENT TYPES: (Article OR Review) Timespan: 2015-2020. Indexes: SCI-EXPANDED, SSCI, A&HCI, CPCI-S, CPCI-SSH, BKCI-S, BKCI-SSH, ESCI, CCR-EXPANDED, IC.

3. Next I identified the most prolific author for each journal, defined as the author with the highest number of publications in each journal for the years 2015-2020.

4. It was then easy to compute the percentage of papers in the journal that included the most prolific author. The same information can readily be obtained by a manual search on Web of Science by selecting Analyse Results and then Authors – this generates a treemap as in Figure 1.
Figure 1: Screenshot of 'Analyse Results' from Web of Science
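For anyone wanting to adapt the method, here is a minimal sketch of steps 2 to 4 in R. It is not the script linked below: it assumes that wosr's pull_wos() returns an author table with one row per paper-author pair and columns 'ut' and 'display_name', and the query shown simply uses the example journal and year range given above.

library(wosr)

# Step 2: pull all articles and reviews for one journal (Web of Science
# credentials are assumed to be held in the WOS_USERNAME and WOS_PASSWORD
# environment variables, which auth() reads by default)
sid <- auth()
query <- "SO = (Journal of Behavioral Addictions) AND PY = (2015-2020) AND DT = (Article OR Review)"
res <- pull_wos(query, sid = sid)

# Steps 3-4: most prolific author, and the percentage of papers that
# include them (assumes res$author has columns 'ut' and 'display_name')
n_papers <- length(unique(res$author$ut))
per_author <- sort(table(res$author$display_name), decreasing = TRUE)
most_prolific <- names(per_author)[1]
pct_by_prolific <- 100 * as.numeric(per_author[1]) / n_papers

# Repeating this for each of the 99 journals gives one score per journal;
# plot(density(scores)) then gives a plot like Figure 2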

A density plot of the distribution of these 'percent by most prolific' scores is shown in Figure 2, and reveals a bimodal distribution with a small hump at the right end corresponding to journals where 8% or more of articles are contributed by a single prolific author. This hump included IJMHA and JBA.

Figure 2: Distribution of % papers by most prolific author for 99 journals

This exercise confirmed my impression that these two journals are outliers in having such a high proportion of papers contributed by one author – in this case Griffiths - as shown in Figure 3. It is noteworthy that a few journals have authors who contributed a remarkably high number of papers, but these tended to be journals with very large numbers of papers (on the right hand side of Figure 3), and so the proportion is less striking. The table corresponding to Figure 3, and the script used to generate the summary data, are available here.

Figure 3: Each point corresponds to one journal: scatterplot shows the N papers and percentage of papers contributed by the most prolific author in that journal

I then repeated this same procedure for the journals involved in bad editorial practices that I featured in earlier blogposts. As shown in Table 1, this 'percent by most prolific' score was also unusually high for those journals during the period when I identified overly brief editorial decision times, but has subsequently recovered to something more normal under new editors. (Regrettably, the publishers have taken no action on the unreviewed papers in these journals, which continue to pollute the literature in this field.)

Journal                                     Year range   N articles   Most prolific author   % by prolific
Research in Developmental Disabilities      2015-2019       972       Steenbergen B            1.34
                                            2010-2014      1665       Sigafoos J               3.78
                                            2005-2009       337       Matson JL                9.2 *
                                            2000-2004       173       Matson JL                8.09 *
Research in Autism Spectrum Disorders       2015-2019       448       Gal E                    1.34
                                            2010-2014       777       Matson JL               10.94 *
                                            2005-2009       182       Matson JL               15.93 *
J Developmental and Physical Disabilities   2015-2019       279       Bitsika V                4.3
                                            2010-2014       226       Matson JL               10.62 *
                                            2005-2009       187       Matson JL                9.63 *
                                            2000-2004       126       Ryan B                   3.17
Developmental NeuroRehabilitation           2015-2019       327       Falkmer T                3.98
                                            2010-2014       252       Matson JL               13.89 *
                                            2005-2009        73       Haley SM                 5.48


Table 1: Analysis of 'percentage by most prolific' publications in four journals with evidence of editorial bias. Rows with '% by prolific' scores greater than 8 are marked with an asterisk.

Could the 'percent by most prolific' score be an indicator of editorial bias? This cannot be assumed: it could be the case that Griffiths produces an enormous amount of high quality work, and chooses to place it in one of two journals that have a relevant readership. Nevertheless, this publishing profile, with one author accounting for more than 10% of the papers in two separate journals, is unusual enough to raise a red flag that the usual peer review process might have been subverted. That flag could easily be lowered if we had information on dates of submission and acceptance of papers, or, better still, open peer review.

I will be writing to Springer, the publisher of IJMHA, and AK Journals, the publisher of JBA, to recommend that they investigate the unusual publication patterns in their journals, and to ask that in future they explicitly report dates of submission and acceptance of papers, as well as the identity of the editor who was responsible for the peer review process. A move to open peer review, already adopted by some journals, is a much bigger step that has been important in giving confidence that ethical publishing practices are followed. Such transparent practices are important not just for detecting problems, but also for ensuring that question marks do not hang unfairly over the heads of authors and editors.

**Update** 20th July 2020.
AK Journals have responded with a very prompt and detailed account of an investigation that they have conducted into the publications in their journal, which finds no evidence of any preferential treatment of papers by Prof Griffiths. See their comment below.  Note also that, contrary to my statement above, dates of receipt/decision for papers in JBA are made public: I could not find them online but they are included in the pdf version of papers published in the journal.

**Update2** 21st July 2020
Professor Griffiths has written two blogposts responding to concerns about his numerous publications in JBA and IJMHA.
In the first, he confirms that the papers in both journals were properly peer-reviewed (as AK Journals have stated in their response), and in the second, he makes the case that he met criteria for authorship in all papers, citing endorsements from co-authors.
I will post here any response I get from IJMHA.  

**Update3** 19th Sept 2020

Springer publishers confirmed in July that they would be conducting an investigation into the issues with IJMHA but subsequent queries have not provided any information other than that the investigation is continuing. 

Meanwhile, it is good to see that the data from the original paper that sparked off this blogpost have now been deposited on OSF, and a correction regarding the results of that study has now also appeared in the journal.





Saturday 6 June 2020

Frogs or termites? Gunshot or cumulative science?


"Tell us again about Monet, Grandpa."

The tl;dr version of this post is that we're all so obsessed with doing new studies that we disregard prior literature. This is largely due to a scientific culture that gives disproportionate value to novel work. This, I argue, weakens our science.

This post has been brewing in my mind ever since I took part in a reading group about systematic reviews. We were discussing the new NIRO guidelines for systematic reviews outside the clinical trials context that are under development by Marta Topor and Jade Pickering. I'd been recommending systematic review as a useful research contribution that could be undertaken when other activities had stalled because of the pandemic. But the enthusiasm of those in the reading group seemed to wane as the session progressed. Yes, everyone agreed, the guidelines were excellent: clear and comprehensive. But it was evident that doing a proper review would not be a "quick win"; the amount of work would of course depend on the number of papers on a topic, but even for a circumscribed subject it was likely to be substantial and involve close reading of a lot of material. Was it a good use of time, people asked. I defended the importance of looking at past literature: it's concerning if we don't read scientific papers because we are all so busy writing them. To my mind, being a serious scholar means being very familiar with past work in a subject area. However, it's concerning that our reward system doesn't value that, making early-career researchers nervous about investing time in it.

The thing that prompted me to put my thoughts into words was a tweet I saw this morning by Mike Johansen (@mikejohansenmd). It seems at first to be on an unrelated topic, but I think it is another symptom of the same issue: a disregard for prior literature. Mike wrote:
Manuscripts should look like: Question: Methods: Results: Limitations: Figures/Tables: Who does these things? Things that don't matter: introduction, discussion. Who does these things?
I replied that he seemed to be recommending that we disregard the prior literature, which I think is a bad idea. I argued "One study is never enough to answer a question - important to consider how this study fits in - or if it doesn't, why."

Noah Haber (@noahhaber) jumped in at this point to say: 
I'm sympathetic (~45% convinced) to the argument that literature reviews in introductions do more harm than good. In practice, they are rarely more than cursory and uncritical, and make us beholden to ideas that have long outlived their usefulness. Space better used in methods.
But I don't think that's a good argument. I'm the first to agree that literature reviews are usually terrible: people only cite the work that confirms their position, and often do that inaccurately. You can see slides from a talk I gave on 'Why your literature review should be systematic' here. But I worry if the response to current unscholarly and biased approaches to the literature is to say that we can just disregard the literature. If you assume that the study you are doing is so important that you don't have time to read other people's studies, it is on the one hand illogical (if we all did that, who would read your studies?), on the other hand disrespectful to fellow scientists, and on the most important third hand (yes, assume a mutant for now) bad for science.

Why is it bad for science? Because science seldom advances by a single study. Solid progress is made when work is cumulative. We have far more confidence in a theory that is supported by a series of experiments than by a single study, however large the effect. Indeed, we know that studies heralding a novel result often overestimate the size of effect – the "winner's curse". So to interpret your study, I want to know how far it is consistent with prior work, and if it isn't whether there might be a good reason for that.

Alas, this approach to science is discouraged by many funders and institutions: calls for research proposals are peppered with words such as "groundbreaking", "transformational", and "novel". There is a horror of doing work that is merely "cumulative". As a consequence, many researchers hop around like frogs in a lilypond, trying to land on a lilypad that is hiding buried treasure. It may sound dull, but I think we should model ourselves more on termites – we can only build an impressive edifice if we collaborate to each do our part and build on what has gone before.

Of course, the termite mound approach is a disaster if the work we try to build on is biased, poorly conducted and over-hyped. Unfortunately that is often the case, as noted by Noah. We come rather full circle here, because I think a motivation for Mike and Noah's tweets is recognition of the importance of reporting work in a way that will make it a solid foundation for a cumulative science of the future. I'm in full agreement with that. Where I disagree, though, is in how we integrate what we are doing now with what has gone before. We do need to see what we are doing as part of a cumulative, collaborative process in taking ideas forward, rather than a series of single-shot studies.

Friday 29 May 2020

Boris Bingo: Strategies for (not) answering questions


On Wednesday 27th May, the Prime Minister, Boris Johnson, appeared before the House of Commons Liaison Committee, to answer questions about the coronavirus crisis. The Liaison Committee is made up of all the Chairs of Select Committees, which are where much of the serious business of government is done. The proceedings are available online, and contrast markedly with Hansard reports from the House of Commons, where the atmosphere is typically gladiatorial, with a lot of political point-scoring. In Select Committees, members from a mix of parties aim to work constructively together. It is customary for the Prime Minister to give evidence to the Liaison Committee three times a year, but this was Boris Johnson's first appearance.

The circumstances were extraordinary. The PM himself did not look well: perhaps not surprising when one considers that he was in intensive care with COVID-19 in April, only leaving hospital on 12th April, with a new baby born on 29th April. Since then, the UK has achieved the dubious distinction of having one of the worst rates of COVID-19 infection in the world. Then, last weekend, a scandal broke around Dominic Cummings, Chief Advisor to the PM, who gave a Press Conference on Monday to explain why he had been travelling around the country with his wife and son, when both he and his wife had suspected COVID-19.

I watched the Liaison Committee live on TV and was agog. There had been fears that the Chair, Bernard Jenkin, would give the PM an easy time. He did not; he chaired impeccably, ensuring committee members stuck to time and that the PM stuck to the point. Questions were polite but challenging, regardless of the political affiliation of the committee member. Did the PM rise to the challenge? This was not the sneering, combative PM that we saw in Brexit debates – he, no doubt, could see that would not go down well with this committee. Rather, the impression he gave was of a man who was winging it and relying on his famous charm in the hope that bluster and bonhomie would win the day. Alas, they did not.

Intrigued by Johnson's strategy – if it can be called that – for answering questions, I have spent some time poring over the transcript of the proceedings, and realised in so doing that I have the material for a new Bingo game. When watching the PM answer questions, you score a point for each of the following strategies you identify. If you do the drinking game version, it may ease the angst otherwise generated by listening to the leader of our nation.

Paltering

This term refers to a common strategy of politicians of appearing to answer a question, without actually doing so. It can give at least a superficial impression that the question has been answered, while deflecting to a related topic. In the following exchange, 'I have no reason to believe' is a big red flag for paltering. The Chair asked what advice the PM had sought from the Cabinet Secretary about Cummings' behaviour in relation to compliance with the code of integrity, and the PM replied:
I have no reason to believe that there is any dissent from what I said a few days ago.
Asked whether Scottish and Welsh first ministers had any influence on the approach to lockdown (Q14):
Stephen, we all work together, and I listen very carefully to what Mark says, to what Arlene and Michelle say, to what Nicola says. Of course we think about it together.
Response to Jeremy Hunt on why there were delays in implementing testing:
As you know, Jeremy, we faced several difficulties with this virus. First, this was a totally new virus and it had some properties that everybody was quite slow to recognise across the world. For instance, it is possible to transmit coronavirus when you are pre-symptomatic—when you do not have symptoms—and I do not think people understood that to begin with.
When Hunt later asked the straightforward question "Why don't we get our test results back in 24 hours?" (Q45), Johnson replied:
That is a very good question. Actually, we are reducing the time—the delay—on getting your test results back. I really pay tribute to Dido Harding and her team. The UK is now testing more people than any other country in Europe. She has got a staff now of 40,000 people, with 7,500 clinicians and 25,000 trackers in all, and they are rapidly trying to accelerate the turnaround time.
When asked by Caroline Nokes about the specific impact of phased school opening on women's ability to get back to work (Q73), Johnson answered a completely different question:
I think your question, Caroline, is directed at whether or not we have sufficient female representation at the top of Government helping us to inform these decisions, and I really think we have

Vagueness
This could take the form of bland agreement with the questioner, but without any clear commitment to action. Greg Clark asked (Q27) why we have a policy of 2 metres for social distancing when the WHO recommends 1 metre. The response was:
...You are making a very important point, one that I have made myself several times—many times—in the course of the debates that we have had.
Pressed further on whether he had asked SAGE whether the 2-metre rule could be revised (Q32), he replied:
I can not only make that commitment—I can tell you that I have already done just that, so I hope we will make progress.
Asked about firms who put their employees on furlough and then threatened them with redundancy (Q99), Johnson agreed this was a Very Bad Thing, but did not actually undertake to do anything about it.
...You are raising a very important point, Huw. This country is nothing without its workforce—its labour. We have to look after people properly, and I am well aware of some of the issues that are starting to arise. People should not be using furlough cynically to keep people on their books and then get rid of them. We want people back in jobs. We want this country back on its feet. That is the whole point of the furlough scheme.
Asked about how the Cabinet were consulted about the unprecedented Press Conference by Cummings (Q9), the PM was remarkably vague, replying:
...I thought that it would be a very good thing if people could understand what I had understood myself previously, I think on the previous day, about what took place—and there you go. We had a long go at it.
Asked to be specific about advice to parents who are in the same situation as Dominic Cummings re childcare (Q21):
...The clear advice is to stay at home unless you absolutely have to go to work to do your job. If you have exceptional problems with childcare, that may cause you to vary your arrangements; that is clear.
The use of the word 'clear' in the PM responses is often a flag for vagueness.

A direct question by Greg Clark on whether contact tracing was compulsory or advisory (Q34) led to a confused answer:
We intend to make it absolutely clear to people that they must stay at home, but let me be clear—
When the questioner followed up to ask whether it was law or advice, he continued:
We will be asking people to stay at home. If they do not follow that advice, we will consider what sanctions may be necessary—financial sanctions, fines or whatever.
It is not always easy to distinguish vagueness from paltering. The PM has a tendency to agree that something is a Very Good Thing, to speak in glowing and over-general terms about initiatives, and about his desire to implement them, without any clear commitment to do more than 'looking at' them. Here he is responding to Robert Halfon on whether there will be additional resources for children whose education has been adversely affected by the shutdown (Q63)
The short answer is that I want to support any measures we can take to level up. You know what we want to do in this Government. There is no doubt that huge social injustice is taking place at the moment because some kids are going to have better access to tutoring and to schooling at home, and other kids are not going to get nearly as much, and that is not fair.
and again, when Halfon asked about apprenticeships (Q64):
All I will say to you, Rob, is that I totally agree that apprenticeships can play a huge part in getting people back on to the jobs market and into work, and we will look at anything to help people.
Halfon pressed on, asking for an apprenticeship guarantee, but the PM descended further into vagueness.
We will be doing absolutely everything we can to get people into jobs, and I will look at the idea of an apprenticeship guarantee. I suppose it is something that we would have to work with employers to deliver.
Other examples came from answers to Darren Jones, who asked about financial support to different sectors, and payments after the furlough scheme ended; e.g. the response to Q89:
We are going to do everything we can, Darren, to get everybody back into work.

Deferral

This was the first strategy to appear, in response to a question by the Chair (Q2) about when the committee might expect to see him again. Johnson made it clear he wasn't going to commit to anything:
You are very kind to want to see me again more frequently, even before we have completed this session, but can I possibly get back to you on that? Obviously, there is a lot on at the moment.
Stephen Timms asked about people who were destitute because, despite having leave to remain, they had no recourse to public funds when they suddenly lost their jobs (Q68). The PM responded:
I am going to have to come back to you on that, Stephen.
It is perhaps unfair to count this one as a deliberate strategy: Johnson seemed genuinely baffled as to how 100,000 children could be living in destitution in a civilised country.

When asked by Mel Stride whether there would be significant increases in the overall tax burden, the PM replied:
I understand exactly where you are going with your question, Mel, but I think you are going to have to wait, if you can, until the Chancellor, Rishi Sunak, brings forward his various proposals.

Refusal to answer

Refusals were mostly polite. An illustration appeared early in the proceedings: when asked by the Chair about Dominic Cummings (Q6), the PM replied:
I do think that is a reasonable question to ask, but as I say, we have a huge amount of exegesis and discussion of what happened in the life of my adviser between 27 March and 14 April. Quite frankly, I am not certain, right now, that an inquiry into that matter is a very good use of official time. We are working flat out on coronavirus.
So the question is accepted as reasonable, but we are asked to understand that it is not high priority for a PM in these challenging times.

Asked by Meg Hillier whether the Cabinet Secretary should see evidence provided by Cummings, the PM responded that this was inappropriate – again arguing that this would be a distraction from higher priorities:
I think, actually, I would not be doing my job if I were now to shuffle this problem into the hands of officials, who are—believe me, Meg—working flat out to deal with coronavirus, as the public would want.
At times, when paltering had been detected, and a follow-up question put him on the spot, Johnson simply dug his heels in, often claiming to have already answered the question. Asked whether the Cabinet Secretary has interviewed Cummings (Q8), Johnson replied:
I am not going to go into the discussions that have taken place, but I have no reason to depart from what I have already said.
And asked whether he'd seen evidence to prove that allegations about Cummings were false (Q17), the PM again replied:
I don’t want to go into much more than I have said—
When Jeremy Hunt asked when a 24-hour test turnaround time would be met (Q48), the rather remarkable reply was:
I am not going to give you a deadline right now, Jeremy, because I have been forbidden from announcing any more targets and deadlines.

Challenge questioner

With this strategy, unwelcome questions were dismissed as having false premises and/or being politically motivated. Pete Wishart (SNP) asked if Cummings' behaviour would make people less likely to obey lockdown rules (Q10). Johnson did not engage with the question, denied any wrongdoing by Cummings, and added:
Notwithstanding the various party political points that you may seek to make and your point about the message, I respectfully disagree.
Similar phrases are seen in response to Yvette Cooper (Q24), who was accused of political point-scoring, and then blamed for confusing the British public (see also Churchillian gambit, below):
I think that this conversation, to my mind, has illuminated why it is so important for us to move on, and be very clear with the British public about how we want to deal with that, and how we want to make progress. And, frankly, when they hear nothing but politicians squabbling and bickering, it is no wonder that they feel confused and bewildered.
And in response to a similar point from Simon Hoare (Q25):
...what they [the people] want now is for us to focus on them and their needs, rather than on a political ding-dong about what one adviser may or may not have done

False claim

This doesn't always involve lying; it can be unclear whether or not the PM knows what is actually the case. But there was at least one instance in his evidence where what he said is widely reported as untrue. Intriguingly, this was not an answer to a direct question, but rather an additional detail when asked about testing in care homes by Jeremy Hunt (Q44)
Do not forget that, as Chris Hopson of NHS Providers has said, every discharge from the NHS into care homes was made by clinicians, and in no case was that done when people were suspected of being coronavirus victims. Actually, the number of discharges from the NHS into care homes went down by 40% from January to March, so it is just not true that there was some concerted effort to move people out of NHS beds into care homes. That is just not right.
A report by ITV News asserted that, contrary to this claim, places in care homes were block-booked for discharged NHS patients.

The Churchillian gambit

When allowed to divert from answering questions, the PM would attempt the kind of rhetoric that had been so successful in Brexit debates, referring to what 'the people' wanted, and to government attempts to 'defeat the virus'.
For instance, this extended response to Q9 re Dominic Cummings:
What we need to do really is move on and get on to how we are going to sort out coronavirus, which is really the overwhelming priority of the people of this country
After a lengthy inquisition by Yvette Cooper, culminating in a direct question about whether he put Dominic Cummings above the national interest (Q24), we again had the appeal to what the British public want.
I think my choice is the choice that the British people want us all to make, Yvette, and that is, as far as we possibly can, to lay aside party political point-scoring, and to put the national interest first, and to be very clear with the British public about what we want to do and how we want to take this country forward.
Overall, there were four mentions of 'getting the country back on its feet', including this statement, appended to his answer to a question on whether sanctions would be needed to ensure compliance with contact tracing (Q58):
Obviously, we are relying very much on the common sense of the public to recognise the extreme seriousness of this. This is our way out. This is our way of defeating the virus and getting our country back on its feet, and I think people will want to work together-
And in response to a further request for clarification about Dominic Cummings from Darren Jones (Q94):
It is my strong belief that what the country wants is for us to be focusing on how to go forward on the test and trace scheme that we are announcing today, and on how we are going to protect their jobs and livelihoods, and defeat this virus.
In all these exchanges, the 'British people' are depicted as decent, long-suffering people, who are having a bad time, and may be anxious or confused. During Brexit debates, this might have worked, but the problem is that now a large percentage of people of all political stripes are just plain angry, and telling them that they want to 'move on' just makes them angrier.

Ironic politeness

The final characterstic has less to do with content of answers than with their style. British political discourse is a goldmine for researchers in pragmatics – the study of how language is used. Attacking your opponent in obsequiously polite language has perhaps arisen in response to historical prohibitions on uncivil discourse in the House of Commons. Boris Johnson is a master of this art, which can be used to put down an opponent while getting a laugh from the audience. He had to be careful with the Liaison Committee, but his comments that they were 'kind to want to see me' and that he was 'delighted to be here today' were transparently insincere, and presumably designed to amuse the audience while establishing his dominance as someone who could choose whether to attend or not.

The final exchange between the Chair and the PM was priceless. The PM reiterated his enjoyment of his session with the committee but refused to undertake to return, because he was 'working flat out to defeat coronavirus and get our country back on its feet'. The Chair replied:
I should just point out that the questions on which you hesitated and decided to go away and think were some of the most positive answers you gave, in some respects. That is where we want to help. I hope you will come back soon.
I read that to mean, on the one hand, most answers were useless, but on the other hand, where the PM had pleaded for deferral, he would be held to account, and required to provide responses to the Committee in future. We shall see if that happens.

Wednesday 13 May 2020

Manipulated images: hiding in plain sight?


Many years ago, I took a taxi from Manchester Airport to my home in Didsbury. It’s a 10-minute drive, but the taxi driver took me on a roundabout route that was twice as long. I started to query this as we veered off course, and was given a rambling story about road closures. I paid the fare but took a note of his details. Next day, having confirmed that there were no road closures, I wrote to complain to Manchester City Council. I was phoned up by a man from the council who cheerfully told me that this driver had a record of this kind of thing, but not to worry: he’d be made to refund me by sending a postal order for the difference between the correct fare and what I’d paid. He sounded quite triumphant about this because, as he explained, it would be tedious for the driver to have to go to a Post Office.

What on earth does this have to do with manipulated images? Well, it’s a parable for what happens when scientists are found to have published papers in which images with crucial data have been manipulated. It seems that typically, when this is discovered, the only consequence for the scientists is that they are required to put things right. So, just as with the taxi driver, there is no incentive for honesty. If you get caught out, you can just make excuses (oh, I got the photos mixed up), and your paper might have a little correction added. This has been documented over and over again by Elisabeth Bik: you can hear a compelling interview with her on the Everything Hertz podcast here.

There are two things about this that I just don’t get. First, why do people take the risk? I work with data in the form of numbers rather than images, so I wonder if I’m missing something. If someone makes up numbers, that can be really hard to detect (though there are some sleuthing methods available; a crude example is sketched below). But if you publish a paper with manipulated images, the evidence of the fraud is right there for everyone to see. In practice, it was only when Bik appeared on the scene, with her amazing ability to spot manipulated images, that the scale of the problem became apparent (see note below). Nevertheless, I am baffled that scientists would leave such a trail of incriminating evidence in their publications, and not worry that at some future date they’d be found out.
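For readers wondering what such sleuthing looks like in practice, here is a minimal sketch, purely illustrative rather than a description of any sleuth's actual workflow: one crude screen compares the leading digits of a set of reported values with the distribution expected under Benford's law (which assumes the genuine data span several orders of magnitude). The function name and the toy numbers below are my own invention.

    # Illustrative sketch only: a Benford's-law screen on leading digits.
    # A large chi-square value merely flags data as worth a closer look;
    # it proves nothing by itself.
    import math
    from collections import Counter

    def benford_screen(values):
        """Chi-square statistic comparing observed leading-digit
        frequencies with the Benford's-law expectation."""
        # First significant digit of each non-zero value
        digits = [int(str(abs(v)).lstrip("0.")[0]) for v in values if v != 0]
        n = len(digits)
        observed = Counter(digits)
        chi_sq = 0.0
        for d in range(1, 10):
            expected = n * math.log10(1 + 1 / d)  # Benford proportion for digit d
            chi_sq += (observed.get(d, 0) - expected) ** 2 / expected
        return chi_sq

    # Toy comparison: suspiciously similar values vs. values spanning
    # several orders of magnitude (the latter should score much lower)
    print(benford_screen([523, 611, 487, 555, 498, 602, 575, 530, 644, 509]))
    print(benford_screen([1.2, 14, 180, 2.3, 31, 960, 1.7, 45, 2900, 12]))

Checks like this only raise a flag, of course; establishing fabrication still requires access to the underlying data, which is one more reason why editorial follow-up matters.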

But I guess the answer to this first question is contained within the second: why isn’t image manipulation taken more seriously? It’s depressing to read how time after time, Bik has contacted journals to point out irregularities in published images only to be ignored. The minority of editors who do decide to act behave like Manchester City Council: the authors have to put the error right, but it seems there are no serious consequences. And meanwhile, like many whistleblowers, far from being thanked for cleaning up science, Elisabeth has suffered repeated assaults on her credibility and integrity from those she has offended.

This week I saw the latest tale in this saga: Bik tweeted about a paper published in Nature that was being taken seriously in relation to treatment for coronavirus. Something in me snapped and I felt it was time to speak out. Image manipulation is fraud. If authors are found to have done it, the paper should be retracted and they should be banned from publishing in that journal in future. I call on the ‘high impact’ journals such as Nature to lead the way in implementing such a policy. I’d like to see some sanctions from institutions and funders as well, but I’ve learned that issues like this need a prolonged campaign to achieve small goals.

I’d be the first to argue that scientists should not be punished for honest errors (see this paper, or free preprint version). It's important to recognise that we are all fallible and prone to make mistakes. I can see how it is possible that someone might mix up two images, for instance. But in many of the cases detected by Elisabeth, part of one image is photoshopped into another, and then resized or rotated. I can’t see how this can be blamed on honest error. The only defence that seems left for the PI is to blame a single rogue member of the lab. If someone they trust is cooking the data, an innocent PI could get implicated in fraud unwittingly. But the best way to avoid that is to have a lab culture in which honesty and integrity are valued above Nature papers. And we’ll only see such a culture become widespread if malpractice has consequences.

‘Hiding in Plain Sight’ is a book by Sarah Kendzior that covers overt criminality in the US political scene, which the author describes as ‘a transnational crime syndicate masquerading as a government’. The culture can be likened to that seen in some areas of high-stakes science. The people who manipulate figures don’t worry about getting found out, because they achieve fame and grants, with no apparent consequences, even when the fraud is detected.

Notes (14th May 2020)
1. Coincidentally, a profile of Elisabeth Bik appeared in Nature the same day as this blogpost: https://www.nature.com/articles/d41586-020-01363-z
2. Correction: Both Elisabeth Bik and Boris Barbour (comment below) pointed out that she was not the first to investigate image manipulation.

Friday 8 May 2020

Attempting to communicate with the BBC: A masterclass in paltering

My original blogpost from October 2019 is immediately below - scroll to the end for the latest response from the BBC

A couple of weeks ago there was an outburst of public indignation after it emerged that the BBC had censured their presenter Naga Munchetty. As reported by the Independent, in July BBC Breakfast reported on comments made by President Trump to four US congresswomen, none of whom was white, whom he told to "go back and help fix the totally broken and crime-infested places from which they came." Naga commented "Every time I've been told as a woman of colour to 'go home', to 'go back to where I've come from', that was embedded in racism."

Most of the commentary at the time focused on whether or not Naga had behaved unprofessionally in making the comment, or whether she was justified in describing Trump's comment as racist. The public outcry has been heard: the Director General of the BBC, Tony Hall, has since overturned the decision to censure her.

There is, however, another concern about the BBC's action, which is why they chose to act on this matter in the first place. All accounts of the story talk of 'a complaint'. The BBC complaints website explains that they can get as many as 200,000 complaints every year, which averages out at 547 a day. Now, I would have thought that they might have some guidelines in place about which complaints to act upon. In particular, they would be expected to take most seriously issues about which there were a large number of complaints. So it seems curious, to say the least, if they had decided to act on a single complaint, and I started wondering whether it had been made by someone with political clout.

The complaints website allows you to submit a complaint or to make a comment, but not to ask a question. Nevertheless, I submitted some questions through the complaints portal, and this morning I received a response, which I append in full below. Here are my questions and the answers:

Q1. Was there really just ONE complaint?
BBC: Ignored

Q2: If yes, how often does the BBC complaints department act on a SINGLE complaint?
BBC: Ignored

Q3: Who made the complaint?
BBC: We appreciate you would like specific information about the audience member who complained about Naga's comments but we can't disclose details of the complainant, but any viewer or listener can make a complaint and pursue it through the BBC's Complaints framework.

Q4: If you cannot disclose identity of the complainant, can you confirm whether it was anyone in public life?
BBC: Ignored

Q5: Can you reassure me that any action against Munchetty was not made because of any political pressure on the BBC?
BBC: Ignored

I guess the BBC are so used to politicians not answering questions that they feel it is acceptable behaviour. I don't, and I treat evasion as evidence of hiding something they don't want us to hear. I was interested to see that Ofcom is on the case, but they have been fobbed off just as I was. Let's keep digging. I smell a large and ugly rat.

Here is the full text of the response:

Dear Prof Bishop
Reference CAS-5652646-TNKKXL
Thank you for contacting the BBC.
I understand you have concerns about the BBC Complaints process specifically with regard to a complaint made regarding Breakfast presenter Naga Munchetty and comments about US President Trump. 
Naturally we regret when any member of our audience is unhappy with any aspect of what we do. We treat all complaints seriously, but what matters is whether the complaint is justified and the BBC acted wrongly. If so we apologise. If we don’t agree that our standards or public service obligations were breached, we try to explain why. We appreciate you would like specific information about the audience member who complained about Naga's comments but we can't disclose details of the complainant, but any viewer or listener can make a complaint and pursue it through the BBC's Complaints framework.
Nonetheless, I understand this is something you feel strongly about and I’ve included your points on our audience feedback report that is sent to senior management each morning and ensures that your complaint has been seen by the right people quickly. 
We appreciate you taking the time to register your views on this matter as it is greatly helpful in informing future decisions at the BBC.
Thanks again for getting in touch.
Kind regards
John Hamill
BBC Complaints Team

8th May 2020
Well, I had complained again, to say the original response did not address the points raised. Nothing happened until today, 7 months later, when out of the blue I received another email. The evasion continues. The answers provide a masterclass in what is known as paltering - here's an article by the BBC explaining what that is. The story, of course, is now so old it will be buried, but I'm minded to conclude that the continuing failure to answer my questions means that this case was escalated on the basis of one complaint by a Very Important Person.

From BBC Complaints, 8th May 2020
Reference CAS-5652646-TNKKXL 

Dear Ms Bishop,

Thank you for getting back in touch with us and please accept our apologies for the long and regrettable delay in responding.

Our initial response didn’t address all of the specific concerns you raised, so we’d like to offer you a further response here addressing your four other questions.


1) Was there really just ONE complaint?
As widely reported in the media, one complaint was escalated to our Executive Complaints Unit (ECU).

2) If yes, how often does the BBC complaints department act on a SINGLE complaint?
Anyone can proceed through the BBC Complaints Framework and take their complaint to the ECU. Ultimately what matters is whether the complaint is justified and each complaint is judged on its own merit - sometimes complaints that go to the ECU are individual, sometimes more than one audience member will make a complaint about the same broadcast.

However, it is worth noting that the number of complaints are not the key factor and our main concern is whether the BBC acted wrongly. Full detail of the ECU’s findings can be found via the links below:

Recent ECU Findings:
http://www.bbc.co.uk/complaints/comp-reports/ecu/

Archived ECU reports:
http://www.bbc.co.uk/complaints/comp-reports/ecu-archive/

3) If you cannot disclose identity of the complainant, can you confirm whether it was anyone in public life?
For reasons of confidentiality, and our responsibility to protect the identity of an individual who complained, we won't be providing any information about them.

4) Can you reassure me that any action against Munchetty was not made because of any political pressure on the BBC?
We can assure you of this. The BBC is independent, and the ECU came to their judgement based on the merits of the case before them, not as a result of any pressure or lobbying.