Monday, 27 July 2020
TEF in the time of pandemic
First, the fact that the Pearce Review has not been published is reminiscent of the Government's strategy of sitting on reports that it finds inconvenient. I think we can assume the report is not a bland endorsement of TEF, but rather that it did identify some of the fundamental statistical problems with the methodology of TEF, all of which just get worse when extended down to subject-level TEF. My own view is that subject-level TEF would be unworkable. If this is what the report says, then it would be an embarrassment for government, and a disappointment for universities that have already invested in the exercise. I'm not confident that this would stop TEF going ahead, but this may be a case where, after so many changes of minister, the government would be willing either to shelve the idea (the more sensible move) or just to delay in the hope that the problems can be overcome.
Second, the whole nature of teaching has changed radically in response to the pandemic. Of course, we are all uncertain about the future, and institutions vary in their predictions, but what I am hearing from the experts in pandemics is that it is wrong to imagine we are living through a blip after which we will return to normal. Some staff are adapting well to the demand for online teaching, but this is going to depend on how far teaching requires a practical element, as well as on how tech-savvy individual teaching staff are. So, if much teaching stays online, then we'd be evaluating universities on a very different teaching profile from the one assessed in TEF.
Finally, there is wide variation in how universities are responding to the impact of the pandemic on staff. Some universities are making staff redundant, especially those on short-term contracts, many are in financial difficulty, and jobs are being frozen. Even in well-established universities such as my own, there are significant numbers of staff who are massively affected by having children to care for at home. Overall, this means that the teaching being delivered is not only different in kind; actual and effective staff/student ratios are also likely to go down.
So my bottom line is that even if the TEF methodology worked (and it doesn't), it's not clear that the statistics used for it would be relevant in future. I get the impression that some HEIs are taking the approach that the show must go on, with regard to both REF and TEF, because they have substantial sunk costs in these exercises (though more for REF than TEF). But staff are incredibly hard-pressed in just delivering teaching and I think enthusiasm for TEF, never high, is at rock bottom right now.
At the annual lecture of the Council for Defence of British Universities in 2018 I argued that TEF should have been strangled at birth. It has struggled on in a sickly and miserable state since 2015. It is now time to put it out of its misery.
Sunday, 12 July 2020
'Percent by most prolific' author score: a red flag for possible editorial bias
This week has seen a strange tale unfold around the publication practices of Professor Mark Griffiths of Nottingham Trent University. Professor Griffiths is an expert in the field of behavioural addictions, including gambling and problematic internet use. He publishes prolifically, and in 2019 published 90 papers, meeting the criterion set by Ioannidis et al (2018) for a hyperprolific author.
More recently, he has published on behavioural aspects of reactions to the COVID-19 pandemic, and he is due to edit a special issue of the International Journal of Mental Health and Addiction (IJMHA) on this topic.
He came to my attention after Dr Brittany Davidson described her attempt to obtain data from a recent study published in IJMHA reporting a scale for measuring fear of COVID-19. She outlined the sequence of events on PubPeer. Essentially Griffiths, as senior author, declined to share the data, despite there being a statement in the paper that the data would be available on request. This was unexpected, given that in a recent paper about gaming disorder research, Griffiths had written:
'Researchers should be encouraged to implement data-sharing procedures and transparency of research procedures by pre-registering their upcoming studies on established platforms such as the Open Science Framework (https://osf.io). Although this may not be entirely sufficient to tackle potential replicability issues, it will likely increase the robustness and transparency of future research.'

It is not uncommon for authors to be reluctant to share data if they have plans to do more work on a dataset, but one would expect the journal editor to take seriously a breach of a statement in the published paper. Dr Davidson reported that she did not receive a reply from Masood Zangeneh, the editor of IJMHA.
This lack of editorial response is concerning, especially given that the IJMHA is a member of the Committee on Publication Ethics (COPE) and Prof Griffiths is an Advisory Editor for the journal. When I looked further, I found that in the last five years, out of 644 articles and reviews published in the journal, 80 (12.42%) have been co-authored by Griffiths. Furthermore, he was a co-author on 51 of 384 (13.28%) articles in the Journal of Behavioral Addictions (JBA). He is also on the editorial board of JBA, which is edited by Zsolt Demetrovics, who has co-authored many papers with Griffiths.
This pattern may have an entirely innocent explanation, but public confidence in the journals may be dented by such skew in authorship, given that editors have considerable power to give an easy ride to papers by their friends and collaborators. In the past, I found that a high rate of publication by favoured authors in certain journals was an indication of gaming by editors, detectable by the fact that papers by favoured authors had acceptance times far too short to be compatible with peer review. Neither IJMHA nor JBA publishes the dates of submission and acceptance of articles, and so it is not possible to evaluate this concern.
We can, however, ask how unusual it is for a single author to dominate the publication profile of a journal. To check this, I did the following analysis:
1. I first identified a set of relevant journals in this field of research by finding papers that cited Griffiths' work and selecting journals that featured at least 10 times on that list. This left 99 journals, after excluding two big generalist journals (PLOS One and Scientific Reports) and one that was not represented on Web of Science.
2. Using the R package wosr, I searched Web of Science for all articles and reviews published in each journal between 2015 and 2020.
This gave results equivalent to a manual search such as: PUBLICATION NAME: (journal of behavioral addictions) AND DOCUMENT TYPES: (Article OR Review) Timespan: 2015-2020. Indexes: SCI-EXPANDED, SSCI, A&HCI, CPCI-S, CPCI-SSH, BKCI-S, BKCI-SSH, ESCI, CCR-EXPANDED, IC.
3. Next I identified the most prolific author for each journal, defined as the author with the highest number of publications in each journal for the years 2015-2020.
4. It was then easy to compute the percentage of papers in the journal that included the most prolific author. The same information can readily be obtained by a manual search on Web of Science by selecting Analyse Results and then Authors – this generates a treemap as in Figure 1.
Figure 1: Screenshot of 'Analyse Results' from Web of Science
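For readers who want to try this kind of analysis themselves, here is a minimal sketch in R of steps 2-4. It is not the script used to generate the data for this post (that is linked later on); it assumes you have API access to Web of Science, and that pull_wos() returns 'publication' and 'author' data frames containing 'ut' (article identifier) and 'display_name' columns, which is my understanding of the package's output.

```r
# Minimal sketch of steps 2-4, assuming Web of Science API access via wosr.
library(wosr)
library(dplyr)

sid <- auth()  # reads WOS_USERNAME and WOS_PASSWORD from environment variables

percent_most_prolific <- function(journal_name, sid) {
  # Equivalent to the manual search described above
  query <- paste0('SO = ("', journal_name, '") AND DT = (Article OR Review) AND PY = (2015-2020)')
  recs <- pull_wos(query, sid = sid)

  n_articles <- n_distinct(recs$publication$ut)   # ut = unique article identifier

  top_author <- recs$author %>%
    distinct(ut, display_name) %>%                # one row per author per article
    count(display_name, sort = TRUE) %>%
    slice(1)                                      # most prolific author in this journal

  data.frame(journal         = journal_name,
             n_articles      = n_articles,
             most_prolific   = top_author$display_name,
             pct_by_prolific = round(100 * top_author$n / n_articles, 2))
}

# Example: percent_most_prolific("Journal of Behavioral Addictions", sid)
```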
A density plot of the distribution of these 'percent by most prolific' scores is shown in Figure 2, and reveals a bimodal distribution with a small hump at the right end corresponding to journals where 8% or more of articles are contributed by a single prolific author. This hump included IJMHA and JBA.
Figure 2: Distribution of % papers by most prolific author for 99 journals
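A plot along the lines of Figure 2 can be drawn with ggplot2 as sketched below; it assumes a data frame journal_stats with one row per journal and a pct_by_prolific column as computed in the sketch above (the names are mine, not necessarily those in the posted script).

```r
library(ggplot2)

# journal_stats: one row per journal, with pct_by_prolific as computed above
ggplot(journal_stats, aes(x = pct_by_prolific)) +
  geom_density(fill = "grey85") +
  geom_vline(xintercept = 8, linetype = "dashed") +   # hump at 8% and above
  labs(x = "% of articles co-authored by the most prolific author, 2015-2020",
       y = "Density") +
  theme_bw()
```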
This exercise confirmed my impression that these two journals are outliers in having such a high proportion of papers contributed by one author – in this case Griffiths – as shown in Figure 3. It is noteworthy that a few journals have authors who contributed a remarkably high number of papers, but these tended to be journals with very large numbers of papers (on the right-hand side of Figure 3), and so the proportion is less striking. The table corresponding to Figure 3, and the script used to generate the summary data, are available here.
Figure 3: Each point corresponds to one journal: scatterplot shows the N papers and percentage of papers contributed by the most prolific author in that journal
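Figure 3 can be approximated in the same way; again this is a sketch using my assumed column names rather than the posted script.

```r
library(ggplot2)

# journal_stats: one row per journal (journal, n_articles, pct_by_prolific)
ggplot(journal_stats, aes(x = n_articles, y = pct_by_prolific)) +
  geom_point() +
  geom_text(data = subset(journal_stats, pct_by_prolific >= 8),
            aes(label = journal), vjust = -0.8, size = 3) +   # label the outlying journals
  labs(x = "N articles and reviews, 2015-2020",
       y = "% of articles by most prolific author") +
  theme_bw()
```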
I then repeated this same procedure for the journals involved in bad editorial practices that I featured in earlier blogposts. As shown in Table 1, this 'percent by most prolific' score was also unusually high for those journals during the period when I identified overly brief editorial decision times, but has subsequently recovered to something more normal under new editors. (Regrettably, the publishers have taken no action on the unreviewed papers in these journals, which continue to pollute the literature in this field.)
| Journal | Year range | N articles | Most prolific author | % by most prolific |
|---|---|---|---|---|
| Research in Developmental Disabilities | 2015-2019 | 972 | Steenbergen B | 1.34 |
| | 2010-2014 | 1665 | Sigafoos J | 3.78 |
| | 2005-2009 | 337 | Matson JL | **9.2** |
| | 2000-2004 | 173 | Matson JL | **8.09** |
| Research in Autism Spectrum Disorders | 2015-2019 | 448 | Gal E | 1.34 |
| | 2010-2014 | 777 | Matson JL | **10.94** |
| | 2005-2009 | 182 | Matson JL | **15.93** |
| J Developmental and Physical Disabilities | 2015-2019 | 279 | Bitsika V | 4.3 |
| | 2010-2014 | 226 | Matson JL | **10.62** |
| | 2005-2009 | 187 | Matson JL | **9.63** |
| | 2000-2004 | 126 | Ryan B | 3.17 |
| Developmental NeuroRehabilitation | 2015-2019 | 327 | Falkmer T | 3.98 |
| | 2010-2014 | 252 | Matson JL | **13.89** |
| | 2005-2009 | 73 | Haley SM | 5.48 |

Table 1: Analysis of 'percentage by most prolific' publications in four journals with evidence of editorial bias. Scores > 8 are shown in bold.
Could the 'percent by most prolific' score be an indicator of editorial bias? This cannot be assumed: it could be the case that Griffiths produces an enormous amount of high-quality work, and chooses to place it in one or other of two journals that have a relevant readership. Nevertheless, this publishing profile, with one author accounting for more than 10% of the papers in two separate journals, is unusual enough to raise a red flag that the usual peer review process might have been subverted. That flag could easily be lowered if we had information on dates of submission and acceptance of papers, or, better still, open peer review.
I will be writing to Springer, the publisher of IJMHA, and AK Journals, the publisher of JBA, to recommend that they investigate the unusual publication patterns in their journals, and to ask that in future they explicitly report dates of submission and acceptance of papers, as well as the identity of the editor who was responsible for the peer review process. A move to open peer review is a much bigger step, but it has been adopted by some journals and has been important in giving confidence that ethical publishing practices are being followed. Such transparent practices are important not just for detecting problems, but also for ensuring that question marks do not hang unfairly over the heads of authors and editors.
**Update** 20th July 2020.
AK Journals have responded with a very prompt and detailed account of an investigation that they have conducted into the publications in their journal, which finds no evidence of any preferential treatment of papers by Prof Griffiths. See their comment below. Note also that, contrary to my statement above, dates of receipt/decision for papers in JBA are made public: I could not find them online but they are included in the pdf version of papers published in the journal.
**Update2** 21st July 2020
Professor Griffiths has written two blogposts responding to concerns about his numerous publications in JBA and IJMHA.
In the first, he confirms that the papers in both journals were properly peer-reviewed (as AK Journals have stated in their response), and in the second, he makes the case that he met criteria for authorship on all papers, citing endorsements from co-authors.
I will post here any response I get from IJMHA.
**Update3** 19th Sept 2020
Springer confirmed in July that they would be conducting an investigation into the issues with IJMHA, but subsequent queries have not provided any information other than that the investigation is continuing.
Meanwhile, it is good to see that the data from the original paper that sparked off this blogpost have now been deposited on OSF, and a correction regarding the results of that study has now also appeared in the journal.