Sunday, 12 July 2020

'Percent by most prolific' author score: a red flag for possible editorial bias

(This is an evolving story: scroll to end of post for updates)

This week has seen a strange tale unfold around the publication practices of Professor Mark Griffiths of Nottingham Trent University. Professor Griffiths is an expert in the field of behavioural addictions, including gambling and problematic internet use. He publishes prolifically, and in 2019 published 90 papers, meeting the criterion set by Ioannidis et al. (2018) for a hyperprolific author (more than 72 papers in a calendar year).

More recently, he has published on behavioural aspects of reactions to the COVID-19 pandemic, and he is due to edit a special issue of the International Journal of Mental Health and Addiction (IJMHA) on this topic.

He came to my attention after Dr Brittany Davidson described her attempt to obtain data from a recent study published in IJMHA reporting a scale for measuring fear of COVID-19. She outlined the sequence of events on PubPeer. Essentially, Griffiths, as senior author, declined to share the data, despite there being a statement in the paper that the data would be available on request. This was unexpected, given that in a recent paper about gaming disorder research, Griffiths had written:
'Researchers should be encouraged to implement data-sharing procedures and transparency of research procedures by pre-registering their upcoming studies on established platforms such as the Open Science Framework (https://osf.io). Although this may not be entirely sufficient to tackle potential replicability issues, it will likely increase the robustness and transparency of future research.'
It is not uncommon for authors to be reluctant to share data if they have plans to do more work on a dataset, but one would expect the journal editor to take seriously a breach of a statement in the published paper. Dr Davidson reported that she did not receive a reply from Masood Zangeneh, the editor of IJMHA.

This lack of editorial response is concerning, especially given that the IJMHA is a member of the Committee on Publication Ethics (COPE) and Prof Griffiths is an Advisory Editor for the journal. When I looked further, I found that in the last five years, out of 644 articles and reviews published in the journal, 80 (12.42%) have been co-authored by Griffiths. Furthermore, he was co-author on 51 of 384 (13.28%) articles in the Journal of Behavioral Addictions (JBA). He is also on the editorial board of JBA, which is edited by Zsolt Demetrovics, who has co-authored many papers with Griffiths.

This pattern may have an entirely innocent explanation, but public confidence in the journals may be dented by such skew in authorship, given that editors have considerable power to give an easy ride to papers by their friends and collaborators. In the past, I found that a high rate of publication by favoured authors in certain journals was an indication of gaming by editors, detectable by the fact that papers by favoured authors had acceptance times far too short to be compatible with peer review. Neither IJMHA nor JBA publishes the dates of submission and acceptance of articles, and so it is not possible to evaluate this concern.

We can however ask, how unusual is it for a single author to dominate the profile of publications in a journal? To check this out, I did an analysis as follows:

1. I first identified a set of relevant journals in this field of research, by identifying papers that cited Griffiths' work. I selected journals that featured at least 10 times on that list. There were 99 of these journals, after excluding two big generalist journals (PLOS One and Scientific Reports) and one that was not represented on Web of Science.

2. Using the R package wosr, I searched Web of Science for all articles and reviews published in each journal between 2015 and 2020.

This gave results equivalent to a manual search such as: PUBLICATION NAME: (journal of behavioral addictions) AND DOCUMENT TYPES: (Article OR Review) Timespan: 2015-2020. Indexes: SCI-EXPANDED, SSCI, A&HCI, CPCI-S, CPCI-SSH, BKCI-S, BKCI-SSH, ESCI, CCR-EXPANDED, IC.

3. Next I identified the most prolific author for each journal, defined as the author with the highest number of publications in each journal for the years 2015-2020.

4. It was then easy to compute the percentage of papers in the journal that included the most prolific author. The same information can readily be obtained by a manual search on Web of Science by selecting Analyse Results and then Authors – this generates a treemap as in Figure 1.
Figure 1: Screenshot of 'Analyse Results' from Web of Science
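The analysis itself was done in R with wosr, but the scoring step in (3) and (4) is simple enough to sketch in a few lines. Purely as an illustration, here is a minimal Python version, with made-up author lists standing in for the real Web of Science records:

```python
from collections import Counter

def percent_by_most_prolific(papers):
    """papers: list of author-name lists, one per article.
    Returns (top author, their paper count, % of papers naming them)."""
    counts = Counter()
    for authors in papers:
        for name in set(authors):  # count an author once per paper
            counts[name] += 1
    top, n = counts.most_common(1)[0]
    return top, n, round(100 * n / len(papers), 2)

# Toy records (hypothetical, not the real journal data)
papers = [
    ["Griffiths M", "Smith A"],
    ["Griffiths M"],
    ["Jones B", "Smith A"],
    ["Griffiths M", "Jones B"],
]
print(percent_by_most_prolific(papers))  # ('Griffiths M', 3, 75.0)
```
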

A density plot of the distribution of these 'percent by most prolific' scores is shown in Figure 2, and reveals a bimodal distribution with a small hump at the right end corresponding to journals where 8% or more articles are contributed by a single prolific author. This hump included IJMHA and JBA.

Figure 2: Distribution of % papers by most prolific author for 99 journals

This exercise confirmed my impression that these two journals are outliers in having such a high proportion of papers contributed by one author – in this case Griffiths – as shown in Figure 3. It is noteworthy that a few journals have authors who contributed a remarkably high number of papers, but these tended to be journals with very large numbers of papers (on the right-hand side of Figure 3), and so the proportion is less striking. The table corresponding to Figure 3, and the script used to generate the summary data, are available here.

Figure 3: Each point corresponds to one journal: scatterplot shows the N papers and percentage of papers contributed by the most prolific author in that journal
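The red-flag criterion itself amounts to a simple threshold test on these scores. A minimal sketch, in which the first two rows use the counts quoted earlier in the post and the third journal is hypothetical:

```python
# Flag journals where the most prolific author accounts for >= 8% of
# papers (the small right-hand hump in the density plot of Figure 2).
# (journal, N articles 2015-2020, % by most prolific author)
scores = [
    ("Int J of Mental Health and Addiction", 644, 12.42),
    ("Journal of Behavioral Addictions", 384, 13.28),
    ("Some Other Journal", 500, 2.10),  # hypothetical comparison
]
THRESHOLD = 8.0
flagged = [name for name, n_papers, pct in scores if pct >= THRESHOLD]
print(flagged)
```
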

I then repeated this same procedure for the journals involved in bad editorial practices that I featured in earlier blogposts. As shown in Table 1, this 'percent by most prolific' score was also unusually high for those journals during the period when I identified overly brief editorial decision times, but has subsequently recovered to something more normal under new editors. (Regrettably, the publishers have taken no action on the unreviewed papers in these journals, which continue to pollute the literature in this field.)

| Journal | Year range | N articles | Most prolific author | % by prolific |
|---|---|---|---|---|
| Research in Developmental Disabilities | 2015-2019 | 972 | Steenbergen B | 1.34 |
| | 2010-2014 | 1665 | Sigafoos J | 3.78 |
| | 2005-2009 | 337 | Matson JL | 9.20 |
| | 2000-2004 | 173 | Matson JL | 8.09 |
| Research in Autism Spectrum Disorders | 2015-2019 | 448 | Gal E | 1.34 |
| | 2010-2014 | 777 | Matson JL | 10.94 |
| | 2005-2009 | 182 | Matson JL | 15.93 |
| J Developmental and Physical Disabilities | 2015-2019 | 279 | Bitsika V | 4.30 |
| | 2010-2014 | 226 | Matson JL | 10.62 |
| | 2005-2009 | 187 | Matson JL | 9.63 |
| | 2000-2004 | 126 | Ryan B | 3.17 |
| Developmental NeuroRehabilitation | 2015-2019 | 327 | Falkmer T | 3.98 |
| | 2010-2014 | 252 | Matson JL | 13.89 |
| | 2005-2009 | 73 | Haley SM | 5.48 |

Table 1: Analysis of 'percentage by most prolific' publications in four journals with evidence of editorial bias. Those with '% most prolific' scores > 8 are shown in pink.

Could the 'percent by most prolific' score be an indicator of editorial bias? This cannot be assumed: it could be the case that Griffiths produces an enormous amount of high quality work, and chooses to place it in one of two journals that have a relevant readership. Nevertheless, this publishing profile, with one author accounting for more than 10% of the papers in two separate journals, is unusual enough to raise a red flag that the usual peer review process might have been subverted. That flag could easily be lowered if we had information on dates of submission and acceptance of papers, or, better still, open peer review.

I will be writing to Springer, the publisher of IJMHA, and AK Journals, the publisher of JBA, to recommend that they investigate the unusual publication patterns in their journals, and to ask that in future they explicitly report dates of submission and acceptance of papers, as well as the identity of the editor responsible for the peer review process. A move to open peer review, already adopted by some journals, is a much bigger step, but one that has been important in giving confidence that ethical publishing practices are followed. Such transparent practices are important not just for detecting problems, but also for ensuring that question marks do not hang unfairly over the heads of authors and editors.

**Update** 20th July 2020.
AK Journals have responded with a very prompt and detailed account of an investigation that they have conducted into the publications in their journal, which finds no evidence of any preferential treatment of papers by Prof Griffiths. See their comment below.  Note also that, contrary to my statement above, dates of receipt/decision for papers in JBA are made public: I could not find them online but they are included in the pdf version of papers published in the journal.

**Update2** 21st July 2020
Professor Griffiths has written two blogposts responding to concerns about his numerous publications in JBA and IJMHA.
In the first, he confirms that the papers in both journals were properly peer-reviewed (as AK Journals have stated in their response), and in the second, he makes the case that he met criteria for authorship in all papers, citing endorsements from co-authors.
I will post here any response I get from IJMHA. 



8 comments:

  1. The editor of JBA is a frequent co-author of Mark Griffiths. Only this year, they have published 11 papers together, 4 of them in JBA.

    ReplyDelete
  2. Mark Griffiths has been alerted that he is plagiarising other people's work to inflate his own h-index. He has taken no steps to make a correction.

    https://mrsuttonntu.wordpress.com/2018/06/16/griffiths-citations/

    ReplyDelete
  3. Dear Professor Bishop,

    Thank you for raising this issue and giving us a reason to look into and respond to these matters at the Journal of Behavioral Addictions (JBA). We at Akadémiai Kiadó take very seriously the publication processes of our journals and strive to undertake these endeavors with the highest integrity. We routinely review the processes and welcome the opportunity to do so presently with respect to the points raised. We thus appreciate your communications with us. Unfortunately, we only received your letter with some regrettable delay.

    In response to your blog, we have investigated the data that are available in our editorial system. We believe that the editorial system captures in a systematic fashion much editing and review information. Our analysis led to the following results.

    In contrast to the statement in the blogpost, the dates when manuscripts have been received by the journal, the dates of receiving revisions, and the dates of acceptance are published on the first page of all JBA papers. This has been our approach from the first issue and this information is in the public domain.

    By reviewing these dates for each publication, one can check the time intervals from initial submission to acceptance for published manuscripts. In case of papers co-authored by Professor Griffiths, 110 days on average transpired between the original submission and the final decision for submitted manuscripts. For the accepted papers, the shortest period to acceptance was 32 days and the longest time to acceptance was 349 days (decisions about rejection were sometimes made sooner). The average duration for Professor Griffiths’ accepted manuscripts to be reviewed is longer than the overall mean of 91.5 days for all papers submitted to the JBA. As such, these data do not support the notion that Professor Griffiths receives preferential rapid reviews.

    It is not the journal’s practice to publicly share the acceptance-to-rejection ratios according to named individuals; i.e., we do not share public records of all papers submitted to the journal identified by author names or a list of names with corresponding overall acceptance rates. However, this information is available for scrutiny by internal investigation. Following review of this data for Professor Griffiths, the journal was satisfied that his submissions were handled appropriately (including at least two peer reviews) and that his work (including both those co-authored by Prof. Griffiths and those not) did not receive preferential treatment with regard to acceptance rate. Moreover, our investigation revealed that papers co-authored by Prof. Griffiths had been handled by ten different associate editors, which substantially reduces the likelihood of any systematically inappropriate reviewing of these submissions.

    While the above investigation supported the notion that JBA’s review process was not biased regarding the evaluation of Prof. Griffiths’ works, we understand that criticisms towards Prof. Griffiths’ publication behavior have been raised. We take these criticisms seriously and believe that they should be looked into further in an appropriate manner. For this reason, we plan to initiate a more formal examination of claims regarding his scientific work to see if his membership in the editorial board of the JBA is consonant or not with the standards of the editorial board.

    ReplyDelete
  4. We believe that a researcher should not be restricted from publishing in a journal because he/she has an editorial role in that journal or because he/she collaborates with any editorial board members. In the case of most journals, editors also publish in the journal. Similarly, in the case of the JBA, many of the editorial board members and associate editors and the editor-in-chief publish papers in the journal, which we believe is acceptable and welcomed as long as an independent review process is secured. We wish to have top researchers in the field be on the editorial board and wish to have top researchers in the field publish in the journal. The important issue here is that the review process be independent which, based on our investigation, we believe to be the case at the JBA.

    Akadémiai Kiadó takes the publication process very seriously. We believe that the data support that our publication process is not biased. Nonetheless, we welcome the opportunity to review the information and to consider how we might continue to improve the process. We will consider making changes (e.g., publishing the names of the handling editor for published manuscripts) to help ensure that we adhere to high review and publishing standards. We consider this response as one step in this process (and not an all-encompassing response) and welcome further dialog on this matter.

    Yours sincerely,
    Gabriella Böhm
    Head of Journal Publishing
    Akadémiai Kiadó

    ReplyDelete
  5. Do you think you would have written this blog article, if MD was at Oxbridge rather than a post-92? Sometimes it's useful to be aware of our own biases.

    ReplyDelete
    Replies
    1. I assume you mean MG.
      Nice try, but I'm on record as being more than willing to criticise people from Oxbridge.

      Delete
  6. Yes, I mean MG (although it's actually MDG having looked at his wiki page).

    Thanks, although I wasn't trying anything - I just thought it was a point worth raising.

    Although Susan Greenfield isn't so much a prolific author as a pusher of pseudoscience - at least she was.

    ReplyDelete
  7. I had been passed this blog, and the subsequent responses and blogposts, by a few Twitter friends. They had been encouraging me to write a blog post looking into and exploring stats, especially around Open Science/Open Access, as I'm always investigating publishing stats, rates, and time differences for journals that make this sort of information public. This proved to be a fairly good example to start with - I wrote about what I found in the following blog post: https://praecoxcuriositas.blogspot.com/2020/07/the-curious-case-of-professor-mark-d.html

    ReplyDelete