
Sunday, 6 December 2020

Faux peer-reviewed journals: a threat to research integrity

 

Despite all its imperfections, peer review is one marker of scientific quality – it indicates that an article has been evaluated prior to publication by at least one, and usually several, experts in the field. An academic journal that does not use peer review would not usually be regarded as a serious source and we would not expect to see it listed in a database such as Clarivate Analytics' Web of Science Core Collection, which "includes only journals that demonstrate high levels of editorial rigor and best practice". Scientists are often evaluated by their citations in Web of Science, with the assumption that this will include only peer-reviewed articles. This makes gaming of citations harder than is the case for less selective databases such as Google Scholar. The selective criteria for inclusion, and claims by Clarivate Analytics to take research integrity very seriously, are presumably the main reasons why academic institutions are willing to pay for access to Web of Science, rather than relying on Google Scholar.

Nevertheless, some journals listed in Web of Science include significant numbers of documents that are not peer-reviewed. I first became aware of this when investigating the publishing profiles of authors with remarkably high rates of publications in a small number of journals. I found that Mark Griffiths, a hyperprolific author who has been interviewed about his astounding rate of publication by the Times Higher Education, has a junior collaborator, Mohammed Mamun, who clearly sees Griffiths as a role model and is starting to rival him in publication rate. Griffiths is a co-author on 31 of the 33 publications authored by Mamun since 2019. While still an undergraduate, Mamun has become the self-styled Director of the Undergraduate Research Organization in Dhaka, subsequently renamed as the Centre for Health Innovation, Networking, Training, Action and Research – Bangladesh. These institutions do not appear to have any formal link with an academic institution, though on ORCID, Mamun lists an ongoing educational affiliation to Jahangirnagar University. His H-index from Web of Science is 11; this drops if one excludes self-citations, which constitute around half of his citations, but it is still remarkable for an undergraduate.

Of the 31 papers listed on Web of Science as coauthored by Mamun and Griffiths, 19 are categorised as letters to the Editor. Letters are a rather odd and heterogeneous category. In most journals they will be brief comments on papers published in the journal, or responses to such comments, and in such cases it is not unusual for the editor to make a decision to publish or not without seeking peer review. However, the letters coauthored by Griffiths and Mamun go beyond this kind of content, and include some brief reports of novel data, as well as case reports on suicide or homicide gleaned from press reports*. I took a closer look at three journals where these letters appeared, to try to understand how such material fitted with their publication criteria.

The International Journal of Mental Health and Addiction (Springer) featured in an earlier blogpost because it had published a remarkably high number of articles authored by Griffiths. In that analysis I did not include letters. The journal gives no guidelines about the format or content of letters, and has published only 16 of them since 2019, but 10 of these are authored by Griffiths, mostly with Mamun. As noted in my prior blogpost, the journal provides no information about dates of submission and acceptance, so one cannot tell whether letters were peer-reviewed. The publisher told me last July, and confirmed again in September, that they are investigating the issues I raised in my last blogpost, but to date there has been no public report on the outcome.

Psychiatry Research, published by Elsevier, is explicit that Case Reports can be included as letters, and provides formatting information (750-1000 words or less, up to 5 references, no tables or figures). Before 2019, around 2-4% of publications in the journal were letters, but this jumped to an astounding 22.4% in 2020, perhaps reflecting what has been termed 'covidization' of research.

The Asian Journal of Psychiatry (AsJP), also published by Elsevier, provides formatting information only (600-800 words, 10 references, 1 figure or table), and does not specify what would constitute the content of letters, but it publishes an extremely large number of them, as shown in Figure 1. This trend started before 2020, so cannot be attributed solely to COVID-mania. 

 

Figure 1: Categorization of documents published in Asian Journal of Psychiatry between 2015 and 2020.

For most journals, letters constitute a negligible proportion of their output, and so it is unlikely to have much impact whether or not they are peer reviewed. However, for a journal like AsJP, where letters outnumber regular articles, the peer review status of letters becomes of interest.  

The only information one can obtain on this point is indirect, by scanning the submission and acceptance dates of articles to see whether the lag between the two is so short as to suggest there was no peer review. Relevant data for all letters published in AsJP and Psychiatry Research are shown in Table 1. It is apparent that there is a wide range of publication lags, some extending for weeks or months, but that lags of 7 days or less are not uncommon. There is no indication that Mamun and Griffiths receive favourable treatment, but they benefit from the same rapid acceptance as other letters, with a 40-50% chance of acceptance within two weeks.
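
For anyone who wants to run this kind of check themselves, the arithmetic is simple once the dates are in hand. The sketch below is in R (the language of the scripts linked in the Addendum) and assumes a hypothetical file, letters.csv, with one row per letter and columns Received and Accepted holding dates in YYYY-MM-DD format; the file and column names are illustrative, not those used in my own script.

```r
# Minimal sketch: distribution of publication lags for letters.
# Assumes letters.csv with columns Received and Accepted (YYYY-MM-DD);
# these names are illustrative, not the ones in the linked script.
letters <- read.csv("letters.csv", stringsAsFactors = FALSE)

letters$lag_days <- as.numeric(as.Date(letters$Accepted) - as.Date(letters$Received))

# Band the lags into illustrative categories (the actual bands used in Table 1 may differ)
lag_band <- cut(letters$lag_days,
                breaks = c(-Inf, 7, 14, 28, Inf),
                labels = c("0-7 days", "8-14 days", "15-28 days", ">28 days"))
round(100 * prop.table(table(lag_band)), 1)

# Proportion accepted within two weeks
mean(letters$lag_days <= 14, na.rm = TRUE)
```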

 

Table 1: Proportions of Letters categorized by publication lag in two journals publishing a large number of Letters. 

Thus an investigation into unusual publishing patterns by one pair of authors has drawn attention to at least two journals that appear to provide an easy way to accumulate a large number of publications that are not peer-reviewed but are nevertheless indexed and cited in Web of Science. If one adds a high rate of self-citation to the mix, and a willingness of editors to accept recycled newspaper reports as case studies, one starts to understand how it is possible for an inexperienced undergraduate to build up a substantial number of citations.

I have drawn this situation to the attention of the integrity officers at Springer and Elsevier, but my previous experience with the Matson publication ring does not fill me with confidence that publishers will take any steps to monitor or control such editorial practices.

Clarivate Analytics recently published a report on Research Integrity, in which they note how "all manner of interventions and manipulations are nowadays directed to the goal of attaining scores and a patina of prestige, for individual or journal, although it may be a thin coat hiding a base metal." They described various methods used by researchers to game citations, but did not mention the use of non-peer-reviewed categories of paper to accumulate citations. They also described technological solutions to improve integrity, but I would argue they need to add to their armamentarium a focus on the peer review process. The lag between submission and acceptance is a far from perfect indicator, but it gives a rough proxy for the likelihood that peer review was undertaken. Requiring journals to make this information available, and to include it in the record of Web of Science, would go some way to identifying at least one form of gaming.

No doubt Prof Griffiths regards himself as a benign influence, helping an enthusiastic and energetic young person from a low-income country establish himself. He has an impressive number of collaborators from all over the world, many of whom have written in his support. Collaboration between those from very different cultures is generally to be welcomed. And I must stress that I have no objection to someone young and inexperienced making a scientific contribution - that is entirely in line with Mertonian norms of universalism. It is the departure from the norm of disinterestedness that concerns me. An obsession with 'publish or perish' leads to gaming of publications and citation counts as a way to get ahead. Research is treated like a game where the focus becomes one's own success rather than the advancement of science. This is a consequence of our distorted incentive structures and it has a damaging effect on scientific quality. It is frankly depressing to see such attitudes being inculcated in junior researchers from around the world.

*Addendum

Letters co-authored by Mamun and Griffiths based on newspaper reports (from Web of Science). 'Self-cites' refers to the number of citations to articles by Griffiths and/or Mamun. Publication lags in square brackets are calculated from the dates of submission and acceptance given on the journal website. The script used to extract these dates, and .csv files with journal records, are available at https://github.com/oscci/miscellaneous (see Faux Peer Review.rmd).

Mamun, MA; Griffiths, MD (2020) A rare case of Bangladeshi student suicide by gunshot due to unusual multiple causalities Asian Journal of Psychiatry, 49 10.1016/j.ajp.2020.101951 [5 self-cites, lag 6 days] 

Mamun, MA; Misti, JM; Griffiths, MD (2020) Suicide of Bangladeshi medical students: Risk factor trends based on Bangladeshi press reports Asian Journal of Psychiatry, 48 10.1016/j.ajp.2019.101905 [2 self-cites, lag 91 days] 

Mamun, MA; Chandrima, RM; Griffiths, MD (2020) Mother and Son Suicide Pact Due to COVID-19-Related Online Learning Issues in Bangladesh: An Unusual Case Report International Journal of Mental Health and Addiction, 10.1007/s11469-020-00362-5 [14 self-cites]

Bhuiyan, AKMI; Sakib, N; Pakpour, AH; Griffiths, MD; Mamun, MA (2020) COVID-19-Related Suicides in Bangladesh Due to Lockdown and Economic Factors: Case Study Evidence from Media Reports International Journal of Mental Health and Addiction, 10.1007/s11469-020-00307-y [10 self-cites]

Mamun, MA; Griffiths, MD (2020) Young Teenage Suicides in Bangladesh-Are Mandatory Junior School Certificate Exams to Blame? International Journal of Mental Health and Addiction, 10.1007/s11469-020-00275-3 [11 self-cites]

Mamun, MA; Siddique, A; Sikder, MT; Griffiths, MD (2020) Student Suicide Risk and Gender: A Retrospective Study from Bangladeshi Press Reports International Journal of Mental Health and Addiction, 10.1007/s11469-020-00267-3 [6 self-cites]

Griffiths, MD; Mamun, MA (2020) COVID-19 suicidal behavior among couples and suicide pacts: Case study evidence from press reports Psychiatry Research, 289 10.1016/j.psychres.2020.113105 [10 self-cites, lag 161 days]

Mamun, MA; Griffiths, MD (2020) PTSD-related suicide six years after the Rana Plaza collapse in Bangladesh Psychiatry Research, 287 10.1016/j.psychres.2019.112645 [0 self-cites, lag 2 days]

 

Sunday, 16 August 2020

PEPIOPs – prolific editors who publish in their own publications

I recently reported on a method for identifying authors who contribute an unusually high proportion of papers to a specific journal. As I noted in my previous blogpost, one cannot assume any wrongdoing just on the grounds of a high publication rate, but when there is a close link between the author in question and the journal's editor, this raises concerns about preferential treatment and the integrity of the peer review process.

In running my analyses, I also found some cases where there wasn't just a close link between the most prolific author in a journal and the editor: they were one and the same person! At around the time I was unearthing these results, Elisabeth Bik tweeted to ask if anyone had examples of editors publishing in their own journals. I realised the analysis scripts I had developed for the 'percent by most prolific' analysis could be readily adapted to look at this question, and so I analysed journals in the broad domain of psychology and behavioural science from six publishers: Springer Nature, Wiley, Taylor and Francis, Sage, Elsevier and American Psychological Association (APA). I focused on those responsible for editorial decisions – typically termed Editor-in-Chief or Associate Editor. Sometimes this was hard to judge: in general, I included 'Deputy editors' with 'Editors-in-Chief' if they were very few in number. I ignored members of editorial boards.
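
To give a concrete idea of the bookkeeping involved, here is a minimal R sketch of the kind of check just described. It is not my actual analysis script: it assumes two illustrative files, journal_records.csv (one row per article published 2015-2019, with a Web of Science-style authors field in which names are separated by '; ') and editors.csv (one row per decision-making editor per journal).

```r
# Minimal sketch, not the actual analysis script: count how many papers each
# decision-making editor co-authored in their own journal, 2015-2019.
# File and column names are illustrative.
records <- read.csv("journal_records.csv", stringsAsFactors = FALSE)  # columns: journal, year, authors
editors <- read.csv("editors.csv", stringsAsFactors = FALSE)          # columns: journal, editor, role

records <- subset(records, year >= 2015 & year <= 2019)

count_papers <- function(journal_name, editor_name) {
  in_journal <- records$authors[records$journal == journal_name]
  # Web of Science exports separate author names with "; "
  sum(grepl(editor_name, in_journal, fixed = TRUE))
}

editors$n_papers <- mapply(count_papers, editors$journal, editors$editor)

# Flag PEPIOPs: 15 or more papers in their own journal over the five years
subset(editors, n_papers >= 15)
```

In practice name matching needs more care than this (initials, diacritics, common surnames), which is one reason I also spot-checked individual cases by hand.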

Before reporting the findings, I thought I should consult more widely about whether people think it is appropriate for an editor to publish in their own journal.

As a starting point, I looked at the guidelines that the Committee on Publication Ethics (COPE) has provided for editors:

Can editors publish in their own journal? 
While you should not be denied the ability to publish in your own journal, you must take extra precautions not to exploit your position or to create an impression of impropriety. Your journal must have a procedure for handling submissions from editors or members of the editorial board that will ensure that the peer review is handled independently of the author/editor. We also recommend that you describe the process in a commentary or similar note once the paper is published.

They link to a case report that notes how this issue can be particularly problematic if the journal is in a narrow field with few publication outlets, and the author is likely to be identifiable even if blinding is adopted.

I thought that it would be interesting to see what the broader academic community thought about this. In a Twitter poll I asked specifically about Editors-in-Chief:

The modal response was that it was OK for an Editor-in-Chief to publish in their own journal at a modest rate (once a year or less). There was general disapproval of editors publishing prolifically in their own journals – even though I emphasised that I was referring to situations where the editorial process was kept independent of the author, as recommended by COPE.

My poll question was not perfectly phrased - I did not specify the type of article: it is usual, for instance, for an editor to write editorials! But I later clarified that I meant regular articles and reviews, and I think most people interpreted it in that way.

The poll provoked some useful debate among those who approved and disapproved of editors publishing in their own journals.

Let's start with some of the reasons against this practice. I should lay my cards on the table and state that personally I am in agreement with Antonia Hamilton, who tweeted to say that as Editor-in-Chief of the Quarterly Journal of Experimental Psychology she would not be submitting papers from her lab to that journal during her term of office, citing concerns about conflict of interest. When I was Editor-in-Chief of the Journal of Child Psychology and Psychiatry, I felt the same: I was concerned that my editorial colleagues would be put in a difficult position if they had to handle one of my papers, and if my papers were accepted, then it might be seen as involving a biased decision, even if that was not the case. Chris Chambers, who was among the 34% who thought an Editor-in-Chief should never publish in their own journal, expressed concerns about breaches of confidentiality, given that the Editor-in-Chief would be able to access identities of reviewers. Others, though, questioned whether that was the case at all journals.

Turning to those who thought it was acceptable for an Editor-in-Chief to publish in their own journal, several people argued that it would be unfair, not just on the editor themselves, but also on members of their research group, if presence of an editorial co-author meant they could not submit to the journal. How much of an issue this is will depend, of course, on how big your research group is, and what other publication outlets are available.  Several people felt there was a big difference between cases where the editor was lead author, and those where they played a more minor role. The extreme case is when an editor is a member of a large research consortium led by someone else. It would seem unduly restrictive to debar a paper by a consortium of 100 researchers just because one of them was the Editor in Chief of the journal. It is worth remembering too that, while being a journal editor is a prestigious role, it is hard work, and publishers may be concerned that it becomes a seriously unattractive option if a major outlet for papers is suddenly off-limits.

Be this as it may, the impression from the Twitter poll was that three papers or more per annum starts to look excessive. In my analysis, shown here, I found several journals where the Editor-in-Chief had coauthored 15 or more articles (excluding editorials/commentaries) in their own journal between 2015 and 2019. I refer to these as PEPIOPs (prolific editors who publish in their own publications). For transparency, I've made my methods and results available, so that others can extend the analysis if they wish to do so: see https://github.com/oscci/Bibliometric_analyses. More legible versions of the tables are also available.

Table 1 shows the number of journals with a PEPIOP, according to publisher. 
Table 1: Number of journals with prolific authors. Columns AE and EIC indicate cases where an Associate Editor or Editor-in-Chief published 15+ papers in the journal between 2015 and 2019
The table makes it clear that it is relatively rare for an Editor-in-Chief to be a PEPIOP: there were no cases for APA journals, and the highest proportion was 5.6%, for Elsevier journals. Note that there are big differences between publishers in how common it is for any author to publish 15+ papers in the journal over 5 years: this was true for around 25% of the Elsevier and Springer journals, and much less common for Sage, APA and Taylor & Francis. This probably reflects the subject matter of the journals - prolific publication, often on multiauthored papers, is more common in biosciences than social sciences.

Individual PEPIOPs are shown in Table 2. An asterisk denotes an Editor-in-Chief who authored more articles in the journal than any other person between 2015 and 2019.

These numbers should be treated with caution – I did not check whether any of these editors had only recently adopted the editorial role, largely because this information is not easy to find from journal websites*. I looked at a small sample of papers from individual PEPIOPs to see if there was anything I had overlooked, but I haven't checked every article, as there are a great many of them. There was one case where this revealed misclassification: the editor of Journal of Paediatrics and Child Health wrote regular short commentaries of around 800 words summarising other papers. This seems an entirely unremarkable thing for an editor to do, but they were classified by Web of Science as 'articles', which led to him being categorised as a PEPIOP. This illustrates how bibliometrics can be misleading.
Table 2: Editors-in-chief who published 15+ papers in own journal 2015-2019
In general I did not find any ready explanation for highly prolific outputs. Where I did spot checks, I found only one case with an explanatory note of the kind COPE recommends (in the journal Maturitas); on the contrary, in many cases there was a statement confirming no conflict of interest for any of the authors. Sometimes there were lists of conflicts relating to commercial aspects, but it was clear that authors did not regard their editorial role as posing a conflict. It is also worth mentioning that there were cases where an Editor-in-Chief was senior author.

The more I have pondered this, the more I realise that the reason I am particularly concerned by Editor-in-Chief PEPIOPs is that this is the person ultimately responsible for the integrity of the publication process. Although publishers are increasingly alert to issues of research integrity, journal websites typically advise readers to contact the Editor-in-Chief if there is a problem. Complaints about editorial decisions, demands for retractions, or concerns about potential fraud or malpractice all come to the Editor-in-Chief. That's fine provided the Editor-in-Chief is a pillar of respectability. The problem is that not all editors are the paragons we'd like them to be: one can come to adopt rather a jaundiced view after encountering editors who don't even bother to reply to expressions of concern, or who take a secretive or dismissive attitude when plausible problems in their journal are flagged up. And there are also cases on record where an editor has abused the power of their position to enhance their own publication record. For this reason, I would strongly advise any Editor-in-Chief not to be a PEPIOP; it just looks bad, even if a robust, independent editorial process has been followed. We need to have confidence and trust in those who act as gatekeepers to journals.

My principal suggestion is that publishers could improve their record for integrity if they went beyond just endorsing COPE guidelines and started to actively check whether those guidelines are adhered to. 

*Note: 21st Aug 2020: A commenter on this blogpost has noted that Helai Huang fits this category - this editor has only been in post since 1 Jan 2020, so the period of prolific publication predated being an editor-in-chief.

Sunday, 12 July 2020

'Percent by most prolific' author score: a red flag for possible editorial bias

(This is an evolving story: scroll to end of post for updates; most recent update 19th Sept 2020)

This week has seen a strange tale unfold around the publication practices of Professor Mark Griffiths of Nottingham Trent University. Professor Griffiths is an expert in the field of behavioural addictions, including gambling and problematic internet use. He publishes prolifically, and in 2019 published 90 papers, meeting the criterion set by Ioannidis et al (2018) for a hyperprolific author.

More recently, he has published on behavioural aspects of reactions to the COVID-19 pandemic, and he is due to edit a special issue of the International Journal of Mental Health and Addiction (IJMHA) on this topic.

He came to my attention after Dr Brittany Davidson described her attempt to obtain data from a recent study published in IJMHA reporting a scale for measuring fear of COVID-19. She outlined the sequence of events on PubPeer.  Essentially Griffiths, as senior author, declined to share the data, despite there being a statement in the paper that the data would be available on request. This was unexpected, given that in a recent paper about gaming disorder research, Griffiths had written:
'Researchers should be encouraged to implement data-sharing procedures and transparency of research procedures by pre-registering their upcoming studies on established platforms such as the Open Science Framework (https://osf.io). Although this may not be entirely sufficient to tackle potential replicability issues, it will likely increase the robustness and transparency of future research.'
It is not uncommon for authors to be reluctant to share data if they have plans to do more work on a dataset, but one would expect the journal editor to take seriously a breach of a statement in the published paper. Dr Davidson reported that she did not receive a reply from Masood Zangeneh, the editor of IJMHA.

This lack of editorial response is concerning, especially given that the IJMHA is a member of the Committee on Publication Ethics (COPE) and Prof Griffiths is an Advisory Editor for the journal. When I looked further, I found that in the last five years, out of 644 articles and reviews published in the journal, 80 (12.42%) have been co-authored by Griffiths. Furthermore, he was co-author on 51 of 384 (13.28%) articles in the Journal of Behavioral Addictions (JBA). He is also on the editorial board of JBA, which is edited by Zsolt Demetrovics, who has coauthored many papers with Griffiths.

This pattern may have an entirely innocent explanation, but public confidence in the journals may be dented by such skew in authorship, given that editors have considerable power to give an easy ride to papers by their friends and collaborators. In the past, I found a high rate of publication by favoured authors in certain journals was an indication of gaming by editors, detectable by the fact that papers by favoured authors had acceptance times far too short to be compatible with peer review. Neither IJMHA nor JBA publishes the dates of submission and acceptance of articles, and so it is not possible to evaluate this concern.

We can however ask, how unusual is it for a single author to dominate the profile of publications in a journal? To check this out, I did an analysis as follows:

1. I first identified a set of relevant journals in this field of research, by identifying papers that cited Griffiths' work. I selected journals that featured at least 10 times on that list. There were 99 of these journals, after excluding two big generalist journals (PLOS One and Scientific Reports) and one that was not represented on Web of Science.

2. Using the R package, wosr, I searched on Web of Science for all articles and reviews published in each journal between 2015 and 2020.

This gave results equivalent to a manual search such as: PUBLICATION NAME: (journal of behavioral addictions) AND DOCUMENT TYPES: (Article OR Review) Timespan: 2015-2020. Indexes: SCI-EXPANDED, SSCI, A&HCI, CPCI-S, CPCI-SSH, BKCI-S, BKCI-SSH, ESCI, CCR-EXPANDED, IC.

3. Next I identified the most prolific author for each journal, defined as the author with the highest number of publications in each journal for the years 2015-2020.

4. It was then easy to compute the percentage of papers in the journal that included the most prolific author (a minimal code sketch of this step is given below, after Figure 1). The same information can readily be obtained by a manual search on Web of Science by selecting Analyse Results and then Authors – this generates a treemap as in Figure 1.
Figure 1: Screenshot of 'Analyse Results' from Web of Science
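
For step 4, the calculation is trivial once the records are in hand. The R sketch below is a simplified stand-in for the script in my repository: it assumes the records for a single journal have been pulled (via wosr, or exported manually from Web of Science) into a file with an authors column in the usual WoS format, with names separated by '; '. The file and column names are illustrative.

```r
# Minimal sketch of the 'percent by most prolific' score for one journal.
# Assumes journal_records.csv has one row per article/review (2015-2020) and
# an authors column with names separated by "; " (illustrative names).
records <- read.csv("journal_records.csv", stringsAsFactors = FALSE)

# Count papers per author, most prolific first
author_counts <- sort(table(unlist(strsplit(records$authors, "; ", fixed = TRUE))),
                      decreasing = TRUE)

most_prolific   <- names(author_counts)[1]
pct_by_prolific <- 100 * author_counts[[1]] / nrow(records)

cat(most_prolific, ": ", round(pct_by_prolific, 2), "% of ", nrow(records), " papers\n", sep = "")
```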

A density plot of the distribution of these 'percent by most prolific' scores is shown in Figure 2, and reveals a bimodal distribution with a small hump at the right end corresponding to journals where 8% or more of articles are contributed by a single prolific author. This hump included IJMHA and JBA.

Figure 2: Distribution of % papers by most prolific author for 99 journals
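
The density plot itself is a one-liner once the per-journal scores are assembled. The following ggplot2 sketch is not necessarily how Figure 2 was produced; it assumes a file journal_scores.csv with one row per journal and a pct_prolific column (illustrative names).

```r
library(ggplot2)

# journal_scores.csv: one row per journal, pct_prolific = 'percent by most prolific'
# (illustrative file and column names)
journal_scores <- read.csv("journal_scores.csv", stringsAsFactors = FALSE)

ggplot(journal_scores, aes(x = pct_prolific)) +
  geom_density(fill = "grey85") +
  geom_vline(xintercept = 8, linetype = "dashed") +  # the ~8% region noted in the text
  labs(x = "% of papers by most prolific author", y = "Density")
```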

This exercise confirmed my impression that these two journals are outliers in having such a high proportion of papers contributed by one author – in this case Griffiths – as shown in Figure 3. It is noteworthy that a few journals have authors who contributed a remarkably high number of papers, but these tended to be journals with very large numbers of papers (on the right-hand side of Figure 3), and so the proportion is less striking. The table corresponding to Figure 3, and the script used to generate the summary data, are available here.

Figure 3: Each point corresponds to one journal: scatterplot shows the N papers and percentage of papers contributed by the most prolific author in that journal

I then repeated this same procedure for the journals involved in bad editorial practices that I featured in earlier blogposts. As shown in Table 1, this 'percent by most prolific' score was also unusually high for those journals during the period when I identified overly brief editorial decision times, but has subsequently recovered to something more normal under new editors. (Regrettably, the publishers have taken no action on the unreviewed papers in these journals, which continue to pollute the literature in this field.)

Journal                                     Year range   N articles   Most prolific author   % by prolific
Research in Developmental Disabilities      2015-2019    972          Steenbergen B          1.34
Research in Developmental Disabilities      2010-2014    1665         Sigafoos J             3.78
Research in Developmental Disabilities      2005-2009    337          Matson JL              9.2 *
Research in Developmental Disabilities      2000-2004    173          Matson JL              8.09 *
Research in Autism Spectrum Disorders       2015-2019    448          Gal E                  1.34
Research in Autism Spectrum Disorders       2010-2014    777          Matson JL              10.94 *
Research in Autism Spectrum Disorders       2005-2009    182          Matson JL              15.93 *
J Developmental and Physical Disabilities   2015-2019    279          Bitsika V              4.3
J Developmental and Physical Disabilities   2010-2014    226          Matson JL              10.62 *
J Developmental and Physical Disabilities   2005-2009    187          Matson JL              9.63 *
J Developmental and Physical Disabilities   2000-2004    126          Ryan B                 3.17
Developmental NeuroRehabilitation           2015-2019    327          Falkmer T              3.98
Developmental NeuroRehabilitation           2010-2014    252          Matson JL              13.89 *
Developmental NeuroRehabilitation           2005-2009    73           Haley SM               5.48

Table 1: Analysis of 'percentage by most prolific' publications in four journals with evidence of editorial bias. Rows with '% by most prolific' scores greater than 8 are marked with an asterisk.

Could the 'percent by most prolific' score be an indicator of editorial bias? This cannot be assumed: it could be the case that Griffiths produces an enormous amount of high quality work, and chooses to place it in one of two journals that have a relevant readership. Nevertheless, this publishing profile, with one author accounting for more than 10% of the papers in two separate journals, is unusual enough to raise a red flag that the usual peer review process might have been subverted. That flag could easily be lowered if we had information on dates of submission and acceptance of papers, or, better still, open peer review.

I will be writing to Springer, the publisher of IJMHA, and AK Journals, the publisher of JBA, to recommend that they investigate the unusual publication patterns in their journals, and to ask that in future they explicitly report dates of submission and acceptance of papers, as well as the identity of the editor who was responsible for the peer review process. A move to open peer review is a much bigger step, but it has been adopted by some journals and has been important in giving confidence that ethical publishing practices are followed. Such transparent practices are important not just for detecting problems, but also for ensuring that question marks do not hang unfairly over the heads of authors and editors.

**Update** 20th July 2020.
AK Journals have responded with a very prompt and detailed account of an investigation that they have conducted into the publications in their journal, which finds no evidence of any preferential treatment of papers by Prof Griffiths. See their comment below.  Note also that, contrary to my statement above, dates of receipt/decision for papers in JBA are made public: I could not find them online but they are included in the pdf version of papers published in the journal.

**Update2** 21st July 2020
Professor Griffiths has written two blogposts responding to concerns about his numerous publications in JBA and IJMHA.
In the first, he confirms that the papers in both journals were properly peer-reviewed (as AK journals have stated in their response), and in the second, he makes the case that he met criteria for authorship in all papers, citing endorsements from co-authors.   
I will post here any response I get from IJMHA.  

**Update3** 19th Sept 2020

Springer publishers confirmed in July that they would be conducting an investigation into the issues with IJMHA but subsequent queries have not provided any information other than that the investigation is continuing. 

Meanwhile, it is good to see that the data from the original paper that sparked off this blogpost have now been deposited on OSF, and a correction regarding the results of that study has now also appeared in the journal.





Saturday, 11 June 2016

Editorial integrity: Publishers on the front line



Thanks to some live tweeting by Anna Sharman (@sharmanedit), I've become aware that the 13th Conference of the European Association of Science Editors (EASE) is taking place in Strasbourg this weekend.
The topic is "Scientific integrity: editors on the front line", and the programme acknowledges Elsevier, who presumably have contributed funding for the conference.
It therefore seems timely to give a brief update of developments following three blogposts I wrote during February-March 2015, documenting some peculiar editorial behaviour at four journals: Research in Autism Spectrum Disorders (RASD: Elsevier), Research in Developmental Disabilities (RIDD: Elsevier), Developmental Neurorehabilitation (DN: Informa Healthcare) and Journal of Developmental and Physical Disabilities (JDPD: Springer).
To do the story full justice, you need to read these blogposts, but in brief, blogpost 1 described how Johnny Matson, the then editor of both RASD and RIDD, had published numerous articles in his own journals and engaged in frequent self-citation, leading to his receiving a 'highly cited' badge from Thomson Reuters. In the comments on that blogpost, another intriguing factor emerged: Matson's tendency to accept papers with little or no review. This was denied by Elsevier, despite clear evidence of very short acceptance lags that were incompatible with review.
Blogpost 2 was prompted by Matson defending himself against accusations of self-citation by pointing out that he published in journals that he did not edit. I checked this out and found he had numerous papers in two other journals, DN and JDPD, and that the median lag between a paper of his being submitted and accepted in DN was one day. (JDPD does not provide data on publication lags.) I therefore looked at the editors of those journals, and found that they themselves were publishing remarkable numbers of papers in RASD and RIDD, again with extremely short publication lags. A trio of editors and editorial board members (Jeff Sigafoos, Giulio Lancioni and Mark O'Reilly) co-authored no fewer than 140 papers in RASD and RIDD between 2010 and 2014, typically with acceptance times of less than 2 weeks. Some of the papers in RIDD were not even in the topic area of developmental disabilities, but covered neurological conditions acquired in adulthood.
In blogpost 3, I turned the focus on to the publisher of RASD and RIDD, Elsevier, to query why they had not done anything about such irregular editorial practices. I did a further analysis of publication lags in RIDD, showing that they had dropped precipitously between 2008 and 2012, and that there was a small band of authors whose prolific papers were published there at amazing speed. I provided all the statistical data to support my case, including interactive spreadsheets that made it easy to determine which editors and authors had been benefiting from the slack editorial standards at these journals.
There was some interesting fall-out from all of this. The second blogpost drew fire from supporters of the editors I had "outed", accusing me of bad behaviour and threatening to complain to my university. Since everything I had said was backed by evidence, this did not concern me. I received heartfelt messages of support from people who were appalled that a particular approach to autism intervention had been promoted by this group of editors, who were in effect using their status to gain the veneer of scientific credibility for work which was not in fact peer-reviewed. I was also contacted by several academics telling me that everyone knew this had been going on for years, but nobody had done anything; this level of passivity was surprising given that many were angry that these authors had reaped benefits from their staggeringly high publication rate, while those outside the charmed circle were left behind. I was urged to go further and raise my concerns with the universities employing those who were capitalising on, or engaging in, lax editorial behaviour. I do, however, have an extremely demanding job, and I hoped that I had done enough by shining a light on dubious practices and providing the full datasets as evidence. However, I now wonder if I should have been more pro-active.
I wrote to express my concerns to publishers of all four journals, and had my correspondence acknowledged. But then? Well, not a lot.
It's clear that Elsevier has taken some action. Indeed, my first blogpost was prompted by Michelle Dawson noting on Twitter that the editorial boards of RASD and RIDD had mysteriously disappeared from the online journals. She had previously noted Matson's pattern of mega-self-citation, and some months earlier, on realising that I was listed as a member of the editorial board of RASD, I had written directly to him, with a copy to the publisher, to express concern. Elsevier did not acknowledge my letter, but it is possible that the changes to the editorial boards that they had started were linked to my concerns.
The first direct response I had from Elsevier came some weeks after my final blogpost, when they explained that they were looking into the situation regarding unreviewed papers, but that this was a huge job and would take a long time. They were presumably disinclined to rely on the files that I had deposited on Open Science Framework, which show the identity and the submission and acceptance dates for every paper in RASD and RIDD. They did appoint new editors and a small group of associate editors for both journals, all with good track records for integrity.
I have heard on the grapevine that they are now evaluating articles published in those journals that have been identified as not having undergone peer review; some of those approached to do these evaluations have mentioned this to me. It's rather unclear how this is going to work, given that, across the two journals, there are nearly 1000 papers where the available data indicate a lag from receipt to acceptance of under 2 weeks. I guess we should be glad that at least the publisher is taking some action, albeit at a snail's pace, but I am dubious as to whether there will be any retractions.
Meanwhile, Developmental Neurorehabilitation changed publisher around the time I was writing, and is now under the care of Taylor and Francis. I wrote to the publisher explaining my concerns and received a polite reply, but then heard no more. I note that the Editor-in-Chief is now Wendy Machalicek, who previously co-edited the journal with Russell Lang. Lang's doctoral advisor was Mark O'Reilly, editor of JDPD, and one of the prolific trio who featured in blogpost 2. Lang himself co-authored 24 papers in RASD and 13 in RIDD, and 35 of these papers were accepted within 2 weeks of receipt. Machalicek has published 11 papers in RASD and 5 in RIDD, and 12 of these 16 papers were accepted within 2 weeks of receipt. She also did her doctorate in O'Reilly's department, and several of her papers are co-authored with him. In an editorial last year, Lang and Machalicek announced changes to the journal, some of which seem to be prompted by a desire to make the reviewing process more rigorous under the new publisher. However, one change is of particular interest: the scope of the journal will be broadened to consider "developmental disability from a lifespan perspective; wherein, it is acknowledged that development occurs throughout a person's life and a range of injuries, diseases and other impairments can cause delayed or abnormal development at any stage of life." That will be good news for Giulio Lancioni, who was previously publishing papers on coma patients, amyotrophic lateral sclerosis, and Alzheimer's disease in RIDD. He and his collaborators – Jeff Sigafoos, Mark O'Reilly, as well as Russell Lang and Johnny Matson – are all current members of the editorial board of the journal.
It seems to be business as usual at the Springer title, Journal of Developmental and Physical Disabilities. Mark O'Reilly is still the editor, with Lang and Sigafoos as associate editors; Lancioni, Machalicek and Matson are all on the editorial board. Springer's willingness to turn a blind eye to editors playing the system becomes clear when one sees that a recent title, "Review Journal of Autism and Developmental Disorders", has as Editor-in-Chief no less a personage than Johnny Matson. And, surprise, surprise, the editorial board includes Lang, Sigafoos and Lancioni.
One of the overarching problems I uncovered when navigating my way around this situation was that there is no effective route for a whistleblower who has uncovered evidence of dubious behaviour by editors. Elsevier has developed a Publishing Ethics Resource Kit, but it is designed to help editors dealing with ethical issues that arise with authors and reviewers. The general advice if you encounter an ethical problem is to contact the editor. The Committee on Publication Ethics also issues guidance, but it is an advisory body with no powers. One would hope that publishers would act with integrity when a serious problem with an editor is revealed, but if my experience is anything to go by, they are extremely reluctant to act and will weave very large carpets to brush the problems under.