Monday, 4 September 2023

Polyunsaturated fatty acids and children's cognition: p-hacking and the canonisation of false facts

One of my favourite articles is a piece by Nissen et al (2016) called "Publication bias and the canonization of false facts". In it, the authors model how false information can masquerade as overwhelming evidence, if, over cycles of experimentation, positive results are more likely to be published than null ones. But their article is not just about publication bias: they go on to show how p-hacking magnifies this effect, because it leads to a false positive rate that is much higher than the nominal rate (typically .05).

I was reminded of this when looking at some literature on polyunsaturated fatty acids and children's cognition. This was a topic I'd had a passing interest in years ago when fish oil was being promoted for children with dyslexia and ADHD. I reviewed the literature back in 2008 for a talk at the British Dyslexia Association (slides here). What was striking then was that, whilst there were studies claiming positive effects of dietary supplements, they all obtained different findings. It looked suspicious to me, as if authors would keep looking in their data, and divide it up every way possible, in order to find something positive to report – in other words, p-hacking seemed rife in this field.

My interest in this area was piqued more recently simply because I was looking at articles that had been flagged up because they contained "tortured phrases". These are verbal expressions that seem to have been selected to avoid plagiarism detectors: they are often unintentionally humorous, because attempts to generate synonyms misfire. For instance, in this article by Khalid et al, published in Taylor and Francis' International Journal of Food Properties we are told: 

"Parkinson’s infection is a typical neurodegenerative sickness. The mix of hereditary and natural variables might be significant in delivering unusual protein inside explicit neuronal gatherings, prompting cell brokenness and later demise" 

And, regarding autism: 

"Chemical imbalance range problem is a term used to portray various beginning stage social correspondence issues and tedious sensorimotor practices identified with a solid hereditary part and different reasons."

The paper was interesting, though, for another reason. It contained a table summarising results from ten randomized controlled trials of polyunsaturated fatty acid supplementation in pregnant women and young children. This was not a systematic review, and it was unclear how the studies had been selected. As I documented on PubPeer,  there were errors in the descriptions of some of the studies, and the interpretation was superficial. But as I checked over the studies, I was also struck by the fact that all studies concluded with a claim of a positive finding, even when the planned analyses gave null results. But, as with the studies I'd looked at in 2008, no two studies found the same thing. All the indicators were that this field is characterised by a mixture of p-hacking and hype, which creates the impression that the benefits of dietary supplementation are well-established, when a more dispassionate look at the evidence suggests considerable scepticism is warranted.

There were three questionable research practices that were prominent. First, testing a large number of 'primary research outcomes' without any correction for multiple comparisons. Three of the papers cited by Khalid did this, and they are marked in Table 1 below with "hmm" in the main analysis column. Two of them argued against using a method such as Bonferroni correction:

"Owing to the exploratory nature of this study, we did not wish to exclude any important relationships by using stringent correction factors for multiple analyses, and we recognised the potential for a type 1 error." (Dunstan et al, 2008)

"Although multiple comparisons are inevitable in studies of this nature, the statistical corrections that are often employed to address this (e.g. Bonferroni correction) infer that multiple relationships (even if consistent and significant) detract from each other, and deal with this by adjustments that abolish any findings without extremely significant levels (P values). However, it has been validly argued that where there are consistent, repeated, coherent and biologically plausible patterns, the results ‘reinforce’ rather than detract from each other (even if P values are significant but not very large)" (Meldrum et al, 2012)
While it is correct that Bonferroni correction is overconservative with correlated outcome measures, there are other methods for protecting the analysis from inflated type I error that should be applied in such cases (Bishop, 2023).
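One standard option (not necessarily the specific method Bishop recommends) is the Holm step-down procedure, which controls the family-wise error rate while being uniformly less conservative than Bonferroni. A minimal sketch, with invented p-values for illustration:

```python
def holm_correction(p_values, alpha=0.05):
    """Holm step-down procedure: controls the family-wise error rate.

    Sort p-values ascending; compare the k-th smallest against
    alpha / (m - k + 1). Stop rejecting at the first failure.
    """
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    reject = [False] * m
    for k, i in enumerate(order):
        if p_values[i] <= alpha / (m - k):
            reject[i] = True
        else:
            break  # all larger p-values fail automatically
    return reject

# Ten outcome measures; one raw p-value is below .05
pvals = [0.04, 0.20, 0.61, 0.08, 0.33, 0.45, 0.07, 0.90, 0.15, 0.52]
print(holm_correction(pvals))  # the .04 fails .05/10, so nothing survives
```

Note how a raw p of .04, which would be trumpeted as 'significant' with no correction, does not survive even the first step when ten outcomes are tested.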

The second practice is conducting subgroup analyses: the initial analysis finds nothing, so a way is found to divide up the sample to find a subgroup that does show the effect. There is a nice paper by Peto that explains the dangers of doing this. The third practice is looking for correlations between variables rather than main effects of intervention: with sufficient variables, it is always possible to find something 'significant' if you don't employ any correction for multiple comparisons. This inflation of false positives by correlational analysis is a well-recognised problem in the field of neuroscience (e.g. Vul et al., 2008).
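A small simulation shows how fast correlational fishing inflates the false positive rate: ten unrelated variables measured on 50 participants yield 45 pairwise correlations, and most pure-noise datasets will produce at least one that crosses the conventional significance threshold. The critical value of r (about .279 for n = 50, two-tailed p < .05) comes from standard tables; everything else here is invented for illustration.

```python
import math
import random

def pearson_r(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

random.seed(42)
n_subjects, n_vars, datasets = 50, 10, 1000
crit_r = 0.279  # |r| for two-tailed p < .05 with n = 50 (df = 48)

hits = 0
for _ in range(datasets):
    # pure noise: no true relationships anywhere
    data = [[random.gauss(0, 1) for _ in range(n_subjects)]
            for _ in range(n_vars)]
    # scan all 45 pairwise correlations for anything 'significant'
    if any(abs(pearson_r(data[i], data[j])) > crit_r
           for i in range(n_vars) for j in range(i + 1, n_vars)):
        hits += 1

print(hits / datasets)  # well over half of pure-noise datasets 'find' something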

Given that such practices were normative in my own field of psychology for many years, I suspect that those who adopt them here are unaware of how serious a risk they run of finding spurious positive results. For instance, if you compare two groups on ten unrelated outcome measures, then the probability that something will give you a 'significant' p-value below .05 is not 5% but 40%. (The probability that none of the 10 results is significant is .95^10, which is .6. So the probability that at least one is below .05 is 1-.6 = .4). Dividing a sample into subgroups in the hope of finding something 'significant' is another way to multiply the rate of false positive findings. 
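This arithmetic can be checked directly, and a quick simulation (assuming ten independent outcome measures and no true effects anywhere) reproduces the same inflation:

```python
import random

# Analytic: probability of at least one 'significant' result in 10 null tests
p_at_least_one = 1 - 0.95 ** 10
print(round(p_at_least_one, 3))  # 0.401

# Simulation: under the null, each p-value is uniform on [0, 1],
# so a 'significant' test is simply a draw below .05
random.seed(1)
trials = 100_000
hits = sum(
    any(random.random() < 0.05 for _ in range(10))
    for _ in range(trials)
)
print(hits / trials)  # close to 0.40
```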

In many fields, p-hacking is virtually impossible to detect because authors will selectively report their 'significant' findings, so the true false positive rate can't be estimated. In randomised controlled trials, the situation is a bit better, provided the study has been registered on a trial registry – this is now standard practice, precisely because it's recognised as an important way to avoid, or at least increase detection of, analytic flexibility and outcome switching. Accordingly, I catalogued, for the 10 studies reviewed by Khalid et al, how many found a significant effect of intervention on their planned, primary outcome measure, and how many focused on other results. The results are depressing. Flexible analyses are universal. Some authors emphasised the provisional nature of findings from exploratory analyses, but many did not. And my suspicion is that, even if the authors add a word of caution, those citing the work will ignore it.  


Table 1: Reporting outcomes for 10 studies cited by Khalid et al (2022)

Khalid #  Register  N     Main result*  Subgrp  Correlatn  Abs -ve  Abs +ve
41        yes       86    NS            yes     no         no       yes
42        no        72    hmm           no      no         no       yes
43        no        420   hmm           no      no         yes      yes
44        yes       90    NS            no      yes        yes      yes
45        no        90    yes           no      yes        NA       yes
46        yes       150   hmm           no      no         yes      yes
47        yes       175   NS            no      yes        yes      yes
48        no        107   NS            yes     no         yes      yes
49        yes       1094  NS            yes     no         yes      yes
50        no        27    yes           no      no         yes      yes

Key: Main result coded as NS (nonsignificant), yes (significant) or hmm (not significant if Bonferroni corrected); Subgrp and Correlatn coded yes or no depending on whether post hoc subgroup or correlational analyses conducted. Abs -ve coded yes if negative results reported in abstract, no if not, and NA if no negative results obtained. Abs +ve coded yes if positive results mentioned in abstract.

I don't know if the Khalid et al review will have any effect – it is so evidently flawed that I hope it will be retracted. But the problems it reveals are not just a feature of the odd rogue review: there is a systemic problem with this area of science, whereby the desire to find positive results, coupled with questionable research practices and publication bias, has led to the construction of a huge edifice of evidence based on extremely shaky foundations. The resulting waste in researcher time and funding that comes from pursuing phantom findings is a scandal that can only be addressed by researchers prioritising rigour, honesty and scholarship over fast and flashy science.

Monday, 14 August 2023

The Discussion section: kill it or reform it?

 

I’m impelled to write a short piece about Discussion sections, after a bit of to and fro on Twitter, which started with @SchoeneggerPhil tweeting about a new paper by Schoenegger and Pils:

 @SchoeneggerPhil: New paper out with @PilsRaimund! We propose a new solution to the crises facing the social sciences: removing the discussion sections. We argue that they harm honest scientific reporting and would provide epistemic benefits if outsourced from the standard article.

Michael C Frank then remarked: 

@mcxfrank: Hot take: no one reads discussions so removing them will improve efficiency but not solve interpretive crises (which are driven by titles and abstracts).

Well, as someone who enjoys reading and writing discussion sections, I found this all very depressing. I argued that we already have a solution to the problems Schoenegger and Pils were trying to fix – Registered Reports. In addition, I said:

@deevybee: It seems you are so worried about people HARKing and otherwise misrepresenting data, that you end up preventing exploratory research and speculation. I see them as crucial for science; it's just a case of clarifying what is hypothesis-testing and what is not.

Russ Poldrack started further discussion by weighing in on the side of Schoenegger/Pils: 

@russpoldrack: I see this as a separate issue from the utility of the discussion section, which is what I liked about the original post. I agree that we need exploration - but I'd rather read speculation in the intro (as motivation for new work) rather than in the discussion (as ad hockery)

@deevybee: not sure i get this. Are you saying introduction should anticipate results before they’ve been reported; or that nobody should ever report a new idea that was stimulated by having seen the results?
@russpoldrack: what I mean is that if you have a new idea stimulated by your results, you should go do some additional work to test the idea, and then write a paper about that. if you want to speculate in your discussion that's fine, I just don't usually feel like it's worth my time to read it

For those who don’t know me, I should start by explaining that I am fully convinced of the problems with how the research literature currently works. I have various talks and slide-decks online on this topic (see here) as well as a long article on cognitive biases. So I agree with the problem that Schoenegger and Pils want to fix: the Discussion section of a paper is often the province of over-hyped findings that are damaging to science as a cumulative process, because they encourage people to waste time following seductive but ultimately false leads. But it’s a bit of a leap from saying that many people misuse the Discussion section to concluding it should be banned, particularly when we do have other solutions to the problems.

What troubles me about the views expressed by Phil and Russ is that it sounds as if they are opposed to anyone reporting a new idea that emerges from consideration of the data. I spend a lot of my time arguing for solutions to the reproducibility crisis, and am familiar with the push-back from those who say “You’ll kill creativity” and “You are forbidding exploratory research”. My response has always been to say I thoroughly approve of people reporting creative insights that come from observing data. The only thing you should not do is to formulate and test a hypothesis from the same data. The Registered Reports model, of which I am a fan, deals very nicely with what I’ve called the four horsemen of the reproducibility apocalypse – p-hacking, HARKing, low power and publication bias – but it does not preclude the researcher coming up with novel ideas in the Discussion: instead, it draws a very clear boundary between what is hypothesis-testing and what is exploratory, and does not allow someone to include hypothesis-testing analyses that were not preregistered.

I found Russ’s comments depressing, because they imply you shouldn’t report a new idea without first doing further work to test it. If you're making a bold new claim, then of course you need to do that. But I see scientific progress as incremental, and some insights could be valuable for others working on the topic to take into consideration. If we could only report on ideas that had been shored up by more experimentation, it could slow down discovery, because people just wouldn't bother. Also, it would make research a sadly solitary activity, where instead of exchanging ideas, we all plod on in our own narrow furrow.

I should come clean and explain that I am currently finalising the write-up of an analysis of a dataset focused on language lateralisation, where I am comparing methods of deriving a laterality index in a purely exploratory fashion. This has generated new ideas about factors that may drive individual differences in laterality – something that has intrigued me for 50 years. I think the appropriate way to handle this in the Discussion is to end with specific predictions that follow from my ideas. Maybe others would be interested in reanalysing existing data, or doing new studies that build on this work, maybe not. I am aiming to do more myself with existing datasets, but am not in a position to gather new data. I am explicit in the write-up that my current study is exploratory and I do not present statistical tests. But it seems to me it would be a bit perverse if I didn’t mention how my analyses had changed my thinking to generate new predictions. If it’s true that nobody reads discussion sections, then I’ll have cast my seed on barren ground, but that still feels better than doing nothing with it - and it's useful for me to have a clear account that can be the basis for subsequent preregistered analyses.

If we adopted the Schoenegger and Pils model, I’d just have to hope that (a) someone else would be interested enough to write a Discussion paper based on my results, and (b) they would have useful insights into what it all means. I have all the cognitive biases of a typical human, so of course I think my own ideas are likely to be better than those of others (despite years of negative feedback). But the underlying nature of laterality is a topic that has intrigued me since I started in neuropsychology, so I don’t think it’s unreasonable for me to think that my insights may not be obvious to others and may also have some value. As Schoenegger and Pils noted:

“… one central cost with our proposal is that we may lose the epistemic advantages of authors discussing their own data in some instances. Particularly, the authors of a study often have unique insights into their data that may not be immediately apparent to third-party researchers. This is especially true for studies that involve complex datasets. By outsourcing the discussion section to third-party authors, we may miss out on important nuances and insights that only the original authors can provide.”

In sum, my view is that, when used properly, the Discussion section serves two purposes. It communicates succinctly the import of the reported results in relation to a priori hypotheses, and it provides an opportunity to consider new ideas stimulated by the results. In practice, the Discussion is often misused. People play down results they don’t like, over-interpret those that accord with their preferred theory, and engage in HARKing. But to say you should get rid of the Discussion because it is misused is like saying we should all give up cars because some people drive too fast and cause accidents.

 

Note re comments. To avoid spam, comments are moderated. In general they will be approved if on-topic and non-abusive, but approval may not be immediate.

Sunday, 23 July 2023

Is Hindawi “well-positioned for revitalization?”


Guest post by Huanzi Zhang*

 Screenshot from https://www.wiley.com/en-us/network/publishing/research-publishing/open-access/wiley-acquires-hindawi-qa-with-liz-ferguson
 

Over the past year, special issues of dozens of Hindawi journals have been exposed as being systematically manipulated, resulting in the delisting of more than 20 Hindawi journals from major journal databases, as well as the retraction of more than 2,700 papers by the publisher. This "unexpected event" at Hindawi also led to a slump in profits for the parent company, John Wiley & Sons. However, in a recent statement, the president, CEO & director of Wiley, Brian Napack, stated that Hindawi was now ready for revitalization and reinstatement of the special issue program. In my opinion, Wiley has not dealt adequately with the integrity issues that led to the problem, but appears focused on growth through the medium of special issues. This raises questions as to whether Hindawi’s operation is sustainable in the long term.

Napack’s statement can be read here. He stated:

 “Fiscal '24 (starts on May 1, 2023) will be a year of revitalization for Hindawi with positive signs already emerging. We've now named a new leader of Hindawi, a talented Wiley veteran with deep expertise in the area. We've restarted the special issues program and we will be ramping it up throughout the year. We're working through the large article backlog, and we are executing our journal growth plans.”

One might, however, be forgiven for being a bit sceptical about this upbeat message, since just a year before, Napack said:

 “Hindawi performed at a very high level this year, delivering strong double-digit organic revenue growth and 36% article output growth on a pro forma basis, and has achieved this with exceptional margins. We have now completed the integration and we are benefiting significantly from Hindawi's industry leading open publishing practices and its highly efficient systems.” 

So what is the problem with Hindawi journals that suddenly got Wiley into trouble? Is Hindawi really well-positioned for revitalization? 

Wiley’s acquisition of Hindawi

Wiley announced the acquisition of Hindawi on January 5, 2021. The person who pushed Wiley to buy Hindawi was Judy Verses, Wiley's Executive Vice President, who left Wiley to join Elsevier a few months after the acquisition was completed. On January 11, 2021, in an interview with Verses and Wiley's Senior Vice President Liz Ferguson, we learned that Wiley expected that Hindawi journals would publish many articles by Chinese authors, expanding their market in China:

 “It has surpassed the US in recent years, and in an increasing number of disciplines is undoubtedly the global leader. Hindawi had the foresight to launch and develop journals that reflect strengths in the China research space. Similar to Wiley, Hindawi identified early on how important it was to serve the needs of China-based researchers. Bringing together our two teams now means we have an even stronger position to be able to work to the needs of those researchers.”

The Publishing Perspective's interview with Jay Flynn, Senior Vice President and Chief Product Officer of Wiley Research, published on January 5, 2021, and Wiley's internal interview with Ferguson, published on February 23, 2021, mentioned the importance of China. The new niche market for Hindawi journals seemed to respond soundly to these moves, and the number of submissions began to increase significantly (Figure 1), while no such increase had been seen in the preceding months. In September 2021, Jay Flynn was promoted to Executive Vice President of Wiley Research to replace Judy Verses.

 

Figure 1. Number of submissions received per month for 24 journals that subsequently had papers retracted by Hindawi. Data from Hindawi Journal Report.

The papermill problem 

The first concerns were raised by research integrity communities and individuals, who since 2021 posted thousands of comments on the PubPeer journal club relating to papers published in hundreds of special issues of Hindawi journals. Many comments were made by anonymous sleuths Rhipidura albiventris, Hoya camphorifolia and Parashorea tomentella, whose contributions sometimes exceeded 100 per day.  Problems were not limited to individual special issues or even individual journals. There appeared to be systematic manipulation of the publishing process, especially affecting special issues, indicating activities of so-called “paper mills” – fraudulent organisations that will sell authorship and/or citations of papers, often faked, for a fee. Leonid Schneider, who runs the blog For Better Science, assisted David Bimler and others in posting their findings from Hindawi journals. Nick Wise discussed “What is going on in Hindawi special issues?” on October 12, 2022. These sleuths noted that many supposedly ‘peer-reviewed’ manuscripts had incoherent or unintelligible content, and corresponding authors used email addresses from other institutions; furthermore, a large number of papers cited irrelevant references, presumably to boost citation counts; some paper mills used the article processing charge (APC) waiver policy of Hindawi to make more profit. The pattern of abnormal citations confirmed that the fraud was not bounded by journals, and so publication-based investigations made it difficult to expose specific paper mills. 

It soon became clear that the special issues that were such a lucrative source of income for Hindawi were wide open to corrupt "guest editors" who, once appointed, could use fake peer review and flood the journal with fraudulent papers and irrelevant citations. In the gold open access model, Wiley earns APCs for every article published in Hindawi journals, whether in a special issue or not, so their incentive to exert quality control is compromised. Some examples are so extreme that nobody could take them seriously, such as an article on the ideological and political education of the Chinese Communist Party in a special issue "Exploration of Human Cognition using Artificial Intelligence in Healthcare", which was submitted, peer-reviewed and received on the same day. There are thousands of other papers which may be genuine but whose subject falls well outside the scope of the special issue where they are published, indicating that the journal is out of editorial control. 

Many of the authors and guest editors of the problematic papers mentioned by Bimler came from Asia. Ruihang Huang, Chunjiong Zhang, and Hanlie Cheng, PhD students at Donghua University, Tongji University and China University of Geosciences, Beijing, respectively, were beneficiaries of the citation manipulation and participants in the manipulation of special issues. Another example is Kaifa Zhao, who approved many nonsense manuscripts for publication as a guest editor at two journals: Computational Intelligence and Neuroscience and Journal of Environmental and Public Health. TigerBB8 identified Zhao as a PhD student at the Hong Kong Polytechnic University, and Dorothy Bishop requested an investigation by the university. As reported in Retraction Watch, their report claimed that Zhao's identity had been stolen by Yizhang Jiang, Zhao's master's program advisor. 

"According to Mr Zhao, he was not aware of relevant emails from Hindawi and has never responded to emails that are related to the two special issues"

 Hindawi takes action (slowly)

After a year of rapid growth in the Chinese market (Figure 2), Wiley acted. On September 28, 2022, Ferguson announced that 511 papers would be retracted from Hindawi journals. Interestingly, no mention was made of the comments on PubPeer and Bimler's blog post; instead it was stated that these retractions were based on the findings of the Hindawi Research Integrity Team. The first retractions were seen in mid-November, with concentrated releases during the Lunar New Year.

 

Figure 2. Number of articles and reviews published in 14 Hindawi journals 2019-2022. Data from Scopus.

It is possible that mass retractions were delayed because Wiley did not want to disrupt their agenda at the 74th Frankfurter Buchmesse (October 19 to 23, 2022). There was no indication that Wiley shared Hindawi's problems at the book fair. Instead, they were busy with other things. Flynn announced the creation of Wiley Partners Solution to meet the "scholarly publishing needs at scale" on October 17. Ferguson participated in a forum entitled "How the Article-Based Economy is Transforming Research Publishing" on October 19. Intriguingly, an essay posted by Chemistry World on April 24, 2023, citing Flynn, noted that Wiley had convened a meeting of publishers at that book fair, invited Clarivate, the owner of major journal database Web of Science (WoS) Core Collection, and disclosed to them the problems with Hindawi journals. We do not know the outcome of this meeting, but change did occur: Hindawi began issuing retractions on November 16, 2022. Even so, special issues of Hindawi journals were still being published with many compromised articles in October 2022, though from December 2022 onwards the number of papers published in special issues decreased.

Delisting of Hindawi journals

The public information prompted journal databases to re-evaluate whether Hindawi journals should continue to be indexed. In February 2023, DOAJ (Directory of Open Access Journals) delisted thirteen Hindawi journals. Then Scopus discontinued the indexing of six Hindawi journals. On March 20, 2023, Clarivate delisted nineteen Hindawi journals from WoS Core Collection. The fact that Education Research International was delisted suggested Clarivate conducted an independent investigation, as this journal had not been mentioned in relevant sources.  

As shown in Figure 3, the actions of the publisher and journal databases did not always involve the same journals. 


Figure 3. Twenty-six problematic Hindawi journals. WoS Core Collection: The journal was delisted from WoS Core Collection in March 2023. DOAJ: The journal was delisted from DOAJ in February 2023. Scopus: The journal was delisted from Scopus in the first half of 2023. Ferguson 500+: Papers in the journal were retracted and Liz Ferguson's statement was cited in the retraction statement. Flynn 2200+: Papers in the journal were retracted by Wiley using similar statements after May 2023. 

Clarivate did not publish the reasons for the delisting of each journal, nor did they delist more Hindawi journals before the release of 2023 Journal Citation Reports on June 28, 2023. Compared to MDPI, whose mega-journal the International Journal of Environmental Research and Public Health was delisted, Wiley's public response was subdued. In a mildly worded statement on March 22, 2023, on their WeChat Official Account, Hindawi said they were "disappointed" that their journals were delisted by Clarivate but did not offer any defence. In another post on April 5, 2023, Hindawi stated they would not appeal the delisting and suggested that the authors submit their manuscripts to Wiley journals. A guest post by Flynn in the Scholarly Kitchen on April 4, 2023, mentioned that:

 “At Wiley we take full responsibility for the quality of the content we publish across our portfolio.” 

He also announced a further 1,200 retractions to be issued by Hindawi journals. Flynn disclosed to Chemistry World how they selected which publications to retract, specifically that he deployed 200 people from his editorial staff to conduct "a manual review of every single paper that we thought may have been compromised".

Impact on authors

Many authors published in Hindawi journals because they had the cachet of being listed in scholarly databases. One author of an article published in February 2023 in Oxidative Medicine and Cellular Longevity distributed email templates she drafted to others via instant messaging software, encouraging them to ask Hindawi to work with Clarivate to index papers with publication dates before March 19, 2023. Anonymous sources described the chaos of Hindawi's customer service in late March 2023. Many people complained that Hindawi never responded to their emails. On the other hand, one author received a response from Oxidative Medicine and Cellular Longevity (omcl@hindawi.com), even though his complaint was about an article in another journal. Some authors of accepted manuscripts complained that Hindawi delayed their requests to withdraw their submissions, and feared that manuscripts would be accidentally published.

Other authors turned on the sleuths who had exposed paper mill activity on PubPeer, describing their activities as "social media-related PubPeer extortion". Jincheng Wang from the Nanjing Drum Tower Hospital, who had published in a compromised special issue, suggested that the intent of those who posted comments was to blackmail the authors, under the threat of translating publicly available comments into Chinese and then posting them on social media in China. He encouraged authors to report these comments to the moderators.

Impact of delisting on Hindawi’s business 

Wiley did not inform investors about the retractions in Hindawi journals in their 2nd quarter report (August 1 to October 31, 2022) published on December 7, 2022. In the 3rd quarter report (November 1, 2022, to January 31, 2023) on March 9, 2023, Napack confronted the issue head-on: 

“Upon discovery, the Wiley team responded quickly, suspending the Hindawi special issues program and fixing the source of the problem by purging the external bad actors and by implementing measures to prevent this from happening again. To date, these actions include increasing editorial controls and introducing new AI-based screening tools into the editorial process. We've also been scrubbing the archive and publicly retracting any compromised articles.”

 And, 

“We put the fixes in place. We feel very good about what we've done. We are reopening the programs. And we are moving forward to clear the backlog and drive forward with our publishing program.” 

However, the statistics on publications showed that publications in special issues continued through November 2022 to January 2023. Despite the claim that the problem had been resolved, more than 200 articles published in special issues of 34 Hindawi journals in 2023 received comments on PubPeer relating to concerns about the publishing process. The release of retraction statements was also delayed. As of July 20, 2023, retraction statements were issued on six dates, on May 24, June 21, June 28, June 29, July 12, and July 19, with 112, 559, 514, 1, 521, and 510 retractions, respectively. The total number of retracted papers, approximately 2,700 including the initial 500 from 2022, substantially exceeds the 1,200 mentioned by Flynn.

An interesting development has been the involvement of law firms who specialize in shareholder rights litigation, such as Rosen Law Firm, Kirby McInerney LLP, Schall Law Firm, Glancy Prongay & Murray LLP, the Law Offices of Frank R. Cruz and the Law Offices of Howard G. Smith. All of these firms recently advertised that they are investigating whether Wiley issued misleading information about Hindawi to the investing public. 

In the 4th quarterly report for 2023, Napack stated that they had remedied all the problems, and were ready for “revitalization”:

“As discussed in Q3, we suspended the fast-growing special issues program after identifying a research integrity issue. This issue was the result of external misconduct by non-Wiley editors and reviewers. Essentially, Wiley decided to take a short-term hit to preserve the integrity of our journals and the value of our highly respected Wiley brand. This industry-wide issue has been widely reported on, and we believe that we now have it fully remediated in Wiley.” 

In the same report, Napack was still expressing satisfaction with Hindawi's performance since the acquisition: 

“Our expectation for Hindawi was a couple-fold. One, it would accelerate our position in that market, which it has; and that it would provide significant growth, which it has, and it will provide the ability to provide significant cascade across our portfolio that we could find homes for the many hundreds of thousands of articles that we get every year that are not published. Our expectations are the same going forward. We expect that over the next 12 to 18 months, we will be fully ramping back up. So, by '25, we're back on course with our volume growth and it should drop to the bottom line at/or we're close to the margin -- very healthy margin that it always has in our -- across all of our Open Access, but certainly across the Hindawi asset. So, the -- relative to our initial expectations, this acquisition has outperformed if you just can look aside for a minute against a very short-term thing that happened to us. But we're going to lead our way -- lead the industry out of it, and we feel very, very good about the future of our overall Open Access program.” 

So the paper mill debacle, which led to thousands of fraudulent papers being published in Hindawi journals over at least two years, is described as “a very short-term thing that happened to us”, is now “fully remediated”, and is blamed on “external misconduct by non-Wiley editors and reviewers”. This leaves hanging the question of how those non-Wiley editors and reviewers not only achieved powerful positions determining what was published in Hindawi journals, but also continued to do so long after sleuths had drawn attention to the problem.

Regaining the market? 

Wiley is a major international publisher, and it has the potential to achieve a Hindawi revitalization, if revitalization is defined as a significant rebound in the number of papers published in Hindawi journals. The question is, which niche market does Hindawi intend to regain? Are they well-positioned to do so? And do they have any appreciation of the tension between their goal of publishing as much as possible and the reputational costs of publishing papers that are low-quality at best and fraudulent at worst?

The sleuth Parashorea tomentella described how the Chinese market niche for special issues of Hindawi journals evolved after the acquisition. There is a highly competitive "first-tier" niche of authors from research universities and institutions, who have many manuscripts. China also has an extended "second-tier" niche of authors from community colleges, polytechnics, and non-teaching hospitals. There are many, many of these potential customers, but they lack manuscripts while needing publications for promotion. Paper mills fill the need of authors in this niche to publish, and the need of publishers to make money. It would be difficult for Hindawi to regain the first-tier niche, because most authors and institutions care about the reputation of the publisher, and even those unconcerned about integrity are spooked by unprecedented large-scale retractions.

Hindawi is, however, in a strong position to regain the second-tier niche. First, despite all the problems, they still have many journals that are recognized by the Chinese authorities (typically those listed in databases such as the WoS Core Collection and Engineering Village). Wiley has close links with those who maintain these databases and may have an advantage in keeping its journals off blacklists. Wiley's longstanding partnership with Chinese research institutes and government stakeholders has brought them particularly close to the National Science Library, Chinese Academy of Sciences (NSL/CAS), a bureaucracy that compiles a blacklist recognized by many Chinese institutions. On June 16, 2023, Wiley and NSL/CAS announced the establishment of a Joint Laboratory on Scientific and Technical Journal Innovation. The press release mentioned research integrity as an important topic for the joint lab, and Liying Yang, director of journal evaluation at NSL/CAS, described what NSL/CAS could contribute, including updates to their Early Warning Journal List, which aims to target paper mills.

For reasons unknown, NSL/CAS has been particularly kind to Hindawi journals. NSL/CAS released their controversial Early Warning Journal List in December 2020, December 2021 and January 2023. In the latest version, the Hindawi journals Biomed Research International, Complexity, Advances in Civil Engineering, Shock and Vibration, Scientific Programming and Journal of Mathematics, which had been on the blacklist, were reinstated. Other problematic Hindawi journals were never on the blacklist. In contrast, the blacklist compiled by another Chinese bureaucracy, the Institute of Scientific and Technical Information of China (ISTIC), does not show undue goodwill toward Hindawi journals. In January and February 2023, some Chinese institutions, such as Zhejiang Gongshang University and Anhui Provincial Hospital (The First Affiliated Hospital of University of Science and Technology of China), told their employees not to submit to Hindawi journals. NSL/CAS was sending a different signal from other Chinese institutions and encouraged authors to continue submitting to Hindawi journals, although this effect was offset by Clarivate's delisting of nineteen Hindawi journals three months later. 

Hindawi’s determination to retain the second-tier niche may explain why they have continued to publish their customers' manuscripts from known paper mills. For instance, an article (now retracted) was published on March 11, 2023, long after the guest editor Kaifa Zhao had been proven to be an impostor. To take another example, the Hindawi Research Integrity Team retracted nine articles from a special issue of BioMed Research International on “Minimally Invasive Treatment Protocols in Clinical Dentistry” between November 22, 2022, and February 14, 2023, but subsequently published new articles in the same special issue on topics out of scope.

 

Figure 6. A special issue (https://www.hindawi.com/journals/bmri/si/652179/) of BioMed Research International published articles on an out-of-scope topic in between retraction statements.
 

Even more alarmingly, special issues of four journals (Computational and Mathematical Methods in Medicine, Journal of Healthcare Engineering, Journal of Environmental and Public Health, and Computational Intelligence and Neuroscience) continued to publish questionable articles after Hindawi announced on May 2, 2023 that the journals had closed (see, e.g., https://pubpeer.com/publications/1F1263D5537A0EF96588929A60D15B). In one weird case, an article that had been accepted 604 days previously was published in Journal of Healthcare Engineering on July 7, 2023. Perhaps this manuscript had been blocked by production processes or by the Hindawi Research Integrity Team, but it was eventually published after the journal closed.

Reasons for pessimism 

The reason I am pessimistic is that so far Wiley's proposals to improve the publishing process for Hindawi journals have focused on AI-based screening tools. Wiley has not committed to hiring more editors for Hindawi journals. As submissions increase, the situation will only get worse if peer review formerly run by guest editors is handed to overworked in-house editors to oversee.

Wiley still has a chance to fix things. The first thing they should do is stop publishing manuscripts that were handled by external bad actors. The second is to issue more retractions. I'm glad to see that in June 2023, Catriona MacCallum, the Director of Open Science at Hindawi, shared their approach to scaling up retractions, including a focus on manipulation of the process rather than author wrongdoing. Publisher retractions are painful for the publisher, journals, and authors, but they are necessary, and there are not enough of them. The third is to investigate the internal bad actors in an open and transparent manner. I would also encourage them to recruit more in-house editors, release the identities of the external bad actors they have found, and document the details of how internal controls failed so lessons can be learned.

Most of us value our reputation for its own sake. As Shakespeare said in Othello: 

“Good name in man and woman, dear my lord, 

Is the immediate jewel of their souls: 

Who steals my purse steals trash; ’tis something, nothing;

’Twas mine, ’tis his, and has been slave to thousands;

But he that filches from me my good name 

Robs me of that which not enriches him, 

And makes me poor indeed” 

Indeed, as the Hindawi story shows, for a commercial organization, reputation is not just a desirable feel-good factor – it has huge financial implications. If an academic publisher like Wiley becomes known for boosting their profits by publishing screeds of arrant nonsense, their bottom line will ultimately suffer. Reputable researchers will not want their name associated with a publisher who behaves this way. If Wiley are not willing to control fraud because it is the right thing to do, they should at least recognize the importance of integrity for retaining the confidence of the academic institutions on whom they depend. 

 

Footnote 

* The author declares that there is no potential conflict of interest. The author uses a pseudonym because he/she lives in an authoritarian state and fears facing unpredictable political reprisals.


Tuesday, 11 April 2023

Papers affected by misconduct: Erratum, correction or retraction?

 

This week, Retraction Watch drew attention to a case summary of a misconduct investigation by the Office of Research Integrity (ORI) into grants and publications by Carlo Spirli, an Assistant Professor of Medicine, Department of Digestive Diseases, Yale University. This was based on an investigation conducted by Yale University plus analysis by ORI, which is reported with commendable transparency.

The conclusions were stark:

“ORI found that Respondent engaged in research misconduct by knowingly, intentionally, or recklessly falsifying and/or fabricating data included in the following four published papers, two presentations, and three grant applications submitted for PHS funds”. Details of the fabricated material in each of these sources were listed.

I suspect this investigation has been going on for a while; I could find no publications by Dr Spirli since 2019. In response to this report, he will "exclude himself voluntarily for a period of four years beginning on March 28, 2023” from contracting or subcontracting (presumably applying for grants) or serving on US Public Health Service committees. Compared to a French case that I blogged about recently, this is a rather more serious outcome, though it nevertheless attracted critical comment on Twitter, and it is less severe than the measures that respondents thought appropriate for misconduct in a recently published survey of Fellows of the National Science Foundation. See Table 5, here.

My focus, here, however, is on another feature, which is similar to the French case. The report concluded that “Respondent will request that the following papers be corrected or retracted”, and then listed three articles published in Hepatology, two from 2012, and another from 2013.

Two of these have already had an ‘erratum’ published in 2022 (more details in Appendix below).

This seems inappropriate for two reasons.

First, according to Elsevier best practice guidelines, ‘an erratum refers to a correction of errors introduced to the article by the publisher’, as opposed to a ‘corrigendum’, which is a correction made on request by the author. 

Dr Spirli has an old CV online dating from 2017, in which he states that he is a member of the Editorial Board of Hepatology. One wonders whether this influenced the Editor who agreed to list these two corrections as errata.

Second, though, the other category of ‘Corrigendum’ (i.e. Correction) also seems inappropriate here. We all make mistakes – I’ve got corrections to some of my papers, even though I try to be careful. It is all too easy to upload the wrong figure or miscompute some values when submitting a paper. If the conclusions are not affected by the error, a Correction is appropriate. But where there is a repeated pattern of falsification of data, or evidence that figures have deliberately been manipulated to fit a narrative, then a correction is not appropriate. The accompanying statements for Spirli’s ‘errata’ (see Appendix below) state that the conclusions are not affected. But the ORI report states that there was ‘reckless falsification or fabrication’ of data. Why, we ask ourselves, would an author falsify or fabricate data? The answer is obvious – to make inconclusive, inconsistent or null findings publishable. If the findings were solid in showing a desired result, there would be no need to engage in fraud. And if an author has shown a repeated tendency to engage in fraud, how can we trust the other data in their papers?

So this is a plea to ORI, CNRS, and other institutions, as well as editors, to start being more robust about the need for retraction of articles when misconduct has been demonstrated. Trying to ‘correct’ fraudulent articles is like trying to cut out a bad section from a rotting fish. The whole thing needs to be thrown away if you want to get rid of the stink.

Appendix

May 4 2022, Erratum to Spirli et al (2012a), Hepatology 2012;56:2363-74. doi: 10.1002/hep.25872

In reference to Spirli et al.,[1] we have become aware of possible errors in Figures 4C, and 5 A, B, and C. Forensic analysis concluded that in Figure 4C, the Actin blot appears to have been spliced and replicated. Therefore, the readings of CC3 as an index of apoptosis induced by Sorafenib are inconclusive. In Figure 5A, splicing is also present in Figure 5A (lane 1 and 12) and 5B (lane 12). These figures intend to show the paradoxical effect of Sorafenib on B-Raf and Raf-1 activity in WT and PC2-defective cells. The phenomenon remains valid, as shown in supplementary Figure 5, where exposure to Raf265, a Raf inhibitor with similar mechanism of Sorafenib generated a similar paradoxical effect. In Figure 5C there is a splice between lines 4 and 6 (effect of the higher concentration-10 μM- sorafenib in PKI treated cells). However, the observation that inhibition of cAMP/PKA with PKI prevents the paradoxical effect of Sorafenib on pERK and proliferation as shown in Figure 6 remains valid and is consistent with the in vivo finding. We believe that within the above limitations, the results and interpretation of the paper remain valid.

In addition to the four problematic figures (‘possible errors’) noted here, the ORI report mentions problems with Figures 3 and 6.

April 17 2022, Erratum to Spirli et al (2012b), Hepatology 2012;55(3):856-68. doi:10.1002/hep.24723

In reference to Spirli et al.,[1] we have become aware of an error in Figure 6A. This figure is intended to show that ER Calcium depletion (in this case using thapsigargin, an inhibitor of SERCA, the pump that allows ER Calcium entry) results in activation of the ERK pathway. The blot shows an example of Western blots from which the averages between phosphorylated ERK and total ERK shown in the bar graphs are then calculated. Forensic analysis concluded that Figure 6A contains lines seemingly duplicated for re-use in separate groups, as the bottom line 1–3 appears the same as lines 4–6. As such this figure should be considered erroneous (or falsified). However, reducing ER Calcium by another mean (chelation by TPEN) still increases ERK phosphorylation, and thus the results and interpretation of the paper remain valid.

17 June 2022, Retraction of Spirli et al (2015), Hepatology 2015 Dec;62(6):1828-39. doi: 10.1002/hep.28138.

The retraction has been agreed upon due to recently verified concerns regarding data authenticity rendering the conclusions uncertain. Several figures included in the article were found to have been falsified.

One can see from the ORI report that this one had so many figure manipulations that it was beyond help. It is the only paper in the report that had been flagged (by an anonymous commenter) on PubPeer.

 

Finally - please note that I welcome civil and on-topic comments, but they may take a while to appear, as comments are moderated to prevent spam.

 

Thursday, 30 March 2023

Open letter to CNRS

Need for transparent and robust response when research misconduct is found

Scroll down for update on correspondence with CNRS Scientific Integrity Officer, 30th March 2023.

(French translation available in Appendix 3 of this document)

This Open Letter is prompted by an article in Le Monde describing an investigation into alleged malpractice at a chemistry lab in CNRS-Université Sorbonne Paris Nord and the subsequent report into the case by CNRS. The signatories are individuals from different institutions who have been involved in investigations of research misconduct in different disciplines, all concerned that the same story is repeated over and over when someone identifies unambiguous evidence of data manipulation.  Quite simply, the response by institutions, publishers and funders is typically slow, opaque and inadequate, and is biased in favour of the accused, paying scant attention to the impact on those who use research, and placing whistleblowers in a difficult position.

 

The facts in this case are clear. More than 20 scientific articles from the lab of one principal investigator have been shown to contain recycled and doctored graphs and electron microscopy images. That is, experiments that should have produced distinct results are illustrated by identical figures, with changes made to the axis legends by copying and pasting numbers on top of previous numbers.

 

Everyone is fallible, and no scientist should be accused of malpractice when honest errors are committed. We need also to be aware of the possibility of accusations made in bad faith by those with an axe to grind. However, there comes a point when there is a repeated pattern of errors for a prolonged period for which there is no innocent explanation. This point is surely reached here: the problematic data are well-documented in a number of PubPeer comments on the articles (see links in Appendix 1 of this document).

 

The response by CNRS to this case, as explained in their report (see Appendix 2 of this document), was to request correction rather than retraction of what were described as “shortcomings and errors”, and to accept the scientist’s account that there was no intentionality, despite clear evidence of a remarkable amount of manipulation and reuse of figures; a disciplinary sanction of exclusion from duties was imposed for just one month.

 

So what should happen when fraud is suspected?  We propose that there should be a prompt investigation, with all results transparently reported. Where there are serious errors in the scientific record, then the research articles should immediately be retracted, any research funding used for fraudulent research should be returned to the funder, and the person responsible for the fraud should not be allowed to run a research lab or supervise students. The whistleblower should be protected from repercussions.

 

In practice, this seldom happens. Instead, we typically see, as in this case, prolonged and secret investigations by institutions, journals and/or funders. There is a strong bias to minimize the severity of malpractice, and to recommend that published work be “corrected” rather than retracted.

 

One can see why this happens. First, all of those concerned are reluctant to believe that researchers are dishonest, and are more willing to assume that the concerns have been exaggerated. It is easy to dismiss whistleblowers as deluded, overzealous or jealous of another’s success. Second, there are concerns about reputational risk to an institution if accounts of fraudulent research are publicised. And third, there is a genuine risk of litigation from those who are accused of data manipulation. So in practice, research misconduct tends to be played down.

 

However, this failure to act effectively has serious consequences:

1.   It gives credibility to fictitious results, slowing down the progress of science by encouraging others to pursue false leads. This can be particularly damaging for junior researchers who may waste years trying to build on invented findings. And in the age of big data, where results in fields such as genetics and pharmaceuticals are harvested to contribute to databases of knowledge, erroneous data pollutes the databases on which we depend.

2.   Where the research has potential for clinical or commercial application, there can be direct damage to patients or businesses.

3.   It allows those who are prepared to cheat to compete with other scientists to gain positions of influence, and so perpetuate further misconduct, while damaging the prospects of honest scientists who obtain less striking results.

4.   It is particularly destructive when data manipulation involves the Principal Investigator of a lab. This creates challenges for honest early-career scientists based in the lab where malpractice occurs – they usually have the stark options of damaging their career prospects by whistleblowing, or leaving science. Those with integrity are thus removed from the pool of active researchers. Those who remain are those who are prepared to overlook integrity in return for career security.  CNRS has a mission to support research training: it is hard to see how this can be achieved if trainees are placed in a lab where misconduct occurs.

5.   It wastes public money from research grants.

6.   It damages public trust in science and trust between scientists.

7.   It damages the reputation of the institutions, funders, journals and publishers associated with the fraudulent work.

8.   Whistleblowers, who should be praised by their institution for doing the right thing, are often made to feel that they are somehow letting the side down by drawing attention to something unpleasant. They are placed at high risk of career damage and stress, and without adequate protection by their institution, may be at risk of litigation. Some institutions have codes of conduct where failure to report an incident that gives reasonable suspicion of research misconduct is itself regarded as misconduct, yet the motivation to adhere to that code will be low if the institution is known to brush such reports under the carpet.

 

The point of this letter is not to revisit the rights and wrongs of this specific case or to promote a campaign against the scientist involved. Rather, we use this case to illustrate what we see as an institutional malaise that is widespread in scientific organisations.  We write to CNRS to express our frustration at their inadequate response to this case, and to ask that they review their disciplinary processes and consider adopting a more robust, timely and transparent process that treats data manipulation with the seriousness it deserves, and serves the needs not just of their researchers, but also of other scientists, and of the public who ultimately provide the research funding.

 

Signed by:

 

Dorothy Bishop, FRS, FBA, FMedSci, Professor of Developmental Neuropsychology (Emeritus), University of Oxford, UK.

 

Patricia Murray, Professor of Stem Cell Biology and Regenerative Medicine, University of Liverpool, UK.

 

Elisabeth Bik, PhD, Science Integrity Consultant

 

Florian Naudet, Professor of Therapeutics, Université de Rennes and Institut Universitaire de France, Paris

 

David Vaux, AO FAA, FAHMS, Honorary Fellow WEHI, & Emeritus Professor University of Melbourne, Australia

 

David A. Sanders, Department of Biological Sciences, Purdue University, USA.

 

Ben W. Mol, Professor of Obstetrics and Gynecology, Melbourne, Australia

 

Timothy D. Clark, PhD, School of Life & Environmental Sciences, Deakin University, Geelong, Australia

 

David Robert Grimes, PhD, School of Medicine, Trinity College Dublin, Ireland

 

Fredrik Jutfelt, Professor of Animal Physiology, Norwegian University of Science and Technology, Trondheim, Norway

 

Nicholas J. L. Brown, PhD, Linnaeus University, Sweden

 

Dominique Roche, Marie Skłodowska-Curie Global Fellow, Institut de biologie, Université de Neuchâtel, Switzerland

 

Lex M. Bouter, Professor Emeritus of Methodology and Integrity, Amsterdam University Medical Center and Vrije Universiteit, Amsterdam, The Netherlands

 

Josefin Sundin, PhD, Department of Aquatic Resources, Swedish University of Agricultural Sciences, Sweden

 

Nick Wise, PhD, Engineering Department, University of Cambridge, UK

 

Guillaume Cabanac, Professor of Computer Science, Université Toulouse 3 – Paul Sabatier and Institut Universitaire de France

 

Iain Chalmers, DSc, MD, FRCPE, Centre for Evidence-Based Medicine, University of Oxford.

 

Response from CNRS, received 28th Feb 2023. 

 French version below. Version en français plus bas. ======================================== 

Dear Colleagues, I have read the open letter you sent me by email on February 22, entitled "Need for transparent and robust response when research misconduct is found". 

I am very surprised that you did not think it necessary to contact the CNRS before publishing this open letter. You are obviously not familiar, or at least very unfamiliar, with CNRS policy and procedures regarding scientific integrity. 

The CNRS deals with these essential issues without any complacency, but tries to be fair and to ensure that the sanctions are proportional to the misconduct committed, while respecting the rules of the French civil service. 

 Your letter mixes generalities about the so-called actions of scientific institutions with paragraphs that apply, perhaps, to the CNRS. If you wish to know how scientific misconduct is handled at the CNRS, I invite you to contact our scientific integrity officer, Rémy Mosseri 

Kind regards, 

Antoine Petit ================== 

Professor Antoine Petit, CNRS CEO 

======================================== 

Chers et chères collègues, J’ai pris connaissance de la lettre ouverte que vous m’avez adressée par courriel le 22 février dernier dont le titre est « Nécessité d'une réponse transparente et robuste en cas de découverte de manquements à l’intégrité scientifique ». 

Je suis très étonné que vous n’ayez pas jugé utile de prendre contact avec le CNRS avant de publier cette lettre ouverte. Vous ne connaissez visiblement pas, ou au minimum très mal, la politique et les procédures du CNRS en ce qui concerne l’intégrité scientifique. 

 Le CNRS traite ces questions essentielles sans aucune complaisance mais en essayant d’être justes et que les sanctions soient proportionnelles aux fautes commises, tout en respectant les règles de la fonction publique française. 

Votre lettre mélange des généralités sur les soi-disant agissement des institutions scientifiques et des paragraphes qui s’appliquent, peut-être, au CNRS. Si vous souhaitez savoir comment les méconduites scientifiques sont traitées au CNRS, je vous invite à prendre contact avec notre référent intégrité scientifique, Rémy Mosseri 

 Bien à vous, ================ 

Antoine Petit CNRS Président - Directeur général 

 

 

Update: March 30th 2023

As recommended by Prof Petit, we contacted Dr Rémy Mosseri, Scientific Integrity Officer, with some specific questions about how research integrity is handled at CNRS. The ensuing correspondence is provided here: 

13th March 2023   

 Dear Dr Mosseri

As you will have seen, Prof Antoine Petit replied to our previous open letter (which you were copied into) concerning the case of research misconduct at Université Sorbonne Paris Nord, featured in Le Monde. We can add that since drawing attention to this case, additional serious concerns have been raised about papers of this group:   

https://pubpeer.com/publications/0FA5031C555737851A865644B55B66. (comments #2 and #3)  

https://pubpeer.com/publications/67AC8D60812782300BB58D6D32E67D  

 https://pubpeer.com/publications/274206B58670596FD557A1E71D41FF  

 https://pubpeer.com/publications/E1BEDDC613F4DE1F0DBF68F2CE6C57

At the suggestion of Prof Petit, we are writing now to request further information about the processes used to evaluate research integrity by CNRS.

The specific points where it would be helpful to have clarification are:  

1. When problems are repeated across many papers, what are the criteria for concluding that there are “shortcomings and errors” rather than misconduct or fraud? Are specific definitions used by CNRS?   

2. When an investigation concludes that a publication contains material that is fabricated, falsified or plagiarised, what criteria are used to determine a recommendation that the paper be corrected, retracted, or other?  

 3. Where it is concluded that a paper should be corrected or retracted, does CNRS require that the notice of retraction/correction mention the reason for this action?   

4. We note that some CNRS reports into research misconduct have been published (https://mis.cnrs.fr/rapports/). What criteria are used to determine whether reports are confidential or public?   

5. What training do CNRS staff and students have in research integrity, and are specific training measures implemented in cases where misconduct has been confirmed?   

6. Do CNRS rules specify that a failure to report suspected research misconduct is itself misconduct?   

7. What measures does CNRS take to protect whistleblowers?   

 (signed by Dorothy Bishop + signatories of original open letter)  


15th March 2023   

Dear collegues,

I will be pleased to try to answer (as best as possible, some questions are more complicate than others) your questions (I guess in english). Due to overbusiness, please forgive me if this is not done immediately. But I expect being able to answer within 2 weeks max.

I would prefer, if you can agree with that, that these answers stay informal. In other word, this would not be considered as an interview or an official document from me, from which I may find in the future selected part reproduced on the internet, without possibly (once it is in the net) the precise context in which they have been written. Would you agree on that?

There are some points in your open letters with which I may disagree, as far as CNRS is concerned. The difficulty is that you wrote an open letter to CNRS, but included general criticisms addressed apparently to the general academic IS treatment (I guess not only focused on CNRS, and even not only to France). If you are interested by my remarks, beside your own questions, I may formulate them. If interested, we could also have a more open and reactive discussion on that, by zoom.

In the meantime, please find enclosed a recent summary (in english) of the MIS activity, which may already get you interested.

Yours sincerely   

Rémy Mosseri  


17th March

Dear Dr Mosseri   

Thanks for your prompt reply, and the interesting MIS summary. We do of course understand that you need time to reply. We would prefer to have a formal response from you, in your role as integrity officer, relating to the specific questions we have raised. The reasons we are writing to you is because of concerns about how CNRS has responded to the case reported in Le Monde. These are of particular interest to the signatories because of our prior experience with institutional responses to cases of fraud. There is considerable international public interest in these matters. I hope you would be able to respond to our questions in a way that we could share publicly. I am happy to give an undertaking that I would not knowingly misrepresent anything you say, or present it out of context. 

Yours sincerely   

Dorothy Bishop, FRS, FMedSci, FBA  


18th March   

Dear Mrs Bishop,

I return to you on two points.

1) You may know that (i) I must apply strict confidentiality to the cases we treat, and (ii) I cannot start (decide alone) an investigation without having a documented allegation that I can then send to the targeted persons, asking for a reply. You mention in your letters four new PubPeer posts concerning the case discussed in a French newspaper last December. It is not clear to me whether you considered that mentioning these posts was a formal allegation or just information. If the former, I must tell you that merely pointing to a PubPeer post is not considered by us to be a formal allegation. If you ask for an investigation to be opened on new elements, you are invited to write and send us a detailed allegation.

2) I have a problem with your answer. I am always very interested to discuss and present the rules underlying our practice (and my impression is that you lack information about them), and even to listen to proposals to improve them. I proposed an informal open discussion with your group, even by Zoom, in which I could explain the coherence underlying our action, and the rules themselves. Notice that we have claimed from the start (2018 for the MIS) that these rules are certainly perfectible; I also had in mind to explain why I may object to some statements in your letter. You do not seem interested in any of this. By the way, I find it quite questionable that your questions (which are certainly interesting, and do not cover the full subject) were sent to us after you opened your public campaign, and not before (as far as I know, though I may be wrong, no prior contact was made by your group with CNRS). I therefore do not think that presenting the coherence of our action can rely on your future decisions.

So we will probably proceed differently. Although some of this information is already present (in French) on our website, we will write a public document, posted on our website in French and English, detailing our rules and the principles guiding our action. We already had this in mind, but had not found time to do it (in particular, to have the information written in English). Most of your questions (and many others) should be answered in this more global document, and you will therefore be free to use this information (by citing the whole text).

Sincerely yours

Rémy Mosseri  


28th March

Dear Dr Mosseri

Your suggestion of creating a global document in response to the questions we raised is most welcome. Thank you.

The information about MIS is also very welcome. Thank you also for explaining the situation with regard to allegations of malpractice. This does make clear the distinctive characteristics of the CNRS procedures in investigating integrity. It is understandable that a formal allegation might be needed to initiate a new investigation, to avoid CNRS being overwhelmed by information or by trivial complaints, and to protect employees from malicious actors; it was rather surprising, though, to hear that you would ignore additional evidence relating to an existing case, especially when brought to you by serious integrity experts. Given that the research that is the topic of the case is clinically relevant, the malpractice has the potential to be damaging to public health, as well as to the research community, to junior scientists, to whistleblowers, and not least to the reputation of CNRS. It would seem a matter of some urgency to remedy matters if a CNRS-funded research group is publishing manipulated data in multiple papers.

To avoid the complications of co-ordinating numerous people, I hereby make a formal request in my own name, specifically asking you to investigate a number of new issues that have arisen since your original investigation. I am copying in the Research Integrity Officer at Université Sorbonne Paris Nord, who I assume would also need to be involved in any investigation.

Here are specific concerns regarding publications from the Laboratoire de Réactivité de Surfaces, UMR CNRS 7197, and CNRS UMR 7244, CSPBAT, Laboratoire de Chimie, Structures et Propriétés de Biomateriaux et d'Agents Therapeutiques. Evidence of data fabrication and questionable methods is apparent in the published papers and is described in the linked PubPeer comments, which I briefly summarise here:

https://pubpeer.com/publications/684C7691DAAD7FCD6B7E9BBCE5346C: Rectangles placed over images showing data, obscuring some regions.

https://pubpeer.com/publications/99DFA69EC0222D3C40477DE9B8F8D6: Concerns expressed about inadequate corrections of earlier work. This suggests that where CNRS has proposed correction of problematic work, it has not confirmed that the correction is satisfactory.

https://pubpeer.com/publications/E1BEDDC613F4DE1F0DBF68F2CE6C57: An expert, Elisabeth Bik, has identified evidence of cut-and-paste of areas in photos of tumours.

https://pubpeer.com/publications/274206B58670596FD557A1E71D41FF: The same plot repeated in different publications.

https://pubpeer.com/publications/1076593A614D44E5019C69C642282B: Another unsatisfactory correction, where inconsistencies remain in the paper.

https://pubpeer.com/publications/0FA5031C555737851A865644B55B66: In addition to reuse of the same histograms across multiple papers, already noted by Raphael Levy, further comments have been added by Elisabeth Bik noting evidence of duplication of regions of plots within figures.

https://pubpeer.com/publications/EA48A476C8B55E382AFD4BD56BDEC6: Yet another correction that does not satisfactorily deal with concerns.

https://pubpeer.com/publications/C9081BBA3DCD96D61FC7E1C22274FA: Another correction that seems to raise more questions.

https://pubpeer.com/publications/36885F09E68EA7D5E881C625BFD998: Curves that should describe experimental data appear to have been generated by formula, and have identical noise patterns.

https://pubpeer.com/publications/FA4ABD243E8518B6C72024EDB98DFA: Curves that should describe experimental data appear to have been generated by formula, and have identical noise patterns.

https://pubpeer.com/publications/DE9875DC8BA22466DB129179506638: A retracted paper appears to have been republished with only minor changes.

https://pubpeer.com/publications/5569A968DD6668A7FBCDD3A355507E: Inconsistencies between the reported size of nanoparticles and the figures.

Please note that this list is likely to grow, as I have been told of concerns regarding other publications that are still being compiled. It would be helpful if your committee could monitor these proactively on PubPeer, rather than relying on sleuths to bring them to your attention with a formal allegation.   

I am sorry we disagree about the benefits of confidentiality vs. transparency. I appreciate that you may not wish to communicate further with me, because I do intend to make correspondence with CNRS public, as I think this is in the public interest. This is not a comfortable situation, but I hope that in the long term further scrutiny of cases of misconduct and institutional responses to them might help us reach a rapprochement about the appropriate methods to adopt in such cases.   

Yours sincerely   

Dorothy Bishop

28th March 2023   

Dear Mrs Bishop,   

I understand that you do not agree with our imperative rules of confidentiality, or with the form in which an allegation should be sent to us in order for us possibly to open an investigation. It seems that, as a general principle, emails have the same status as private correspondence, and should therefore not be transferred to third parties without the consent of the author of the email. I politely answered your emails, but did not have in mind that these answers would be made public without my consent. Knowing that, do what your personal ethics tell you...

Yours sincerely   

 Rémy Mosseri 

Afterword   

My personal ethics tell me to publish this correspondence, even though Dr Mosseri feels this is inappropriate. There are situations when confidentiality is important, especially early in an investigation when allegations are made and information is discussed that could affect a scientist’s reputation, before the validity of the allegations is established. However, none of the matters discussed with Dr Mosseri are of this nature. Our questions to him were general ones about CNRS procedures. We rejected his suggestion that we should discuss these informally, and asked instead for a formal response by him in his role as Scientific Integrity Officer. Insofar as evidence of scientific misconduct is mentioned in our correspondence, this relates to a case that has already been discussed in a report that is in the public domain, and all the PubPeer comments are also in the public domain.

Ethical judgements involve weighing up conflicting interests. As noted in my last email, in this case, research malpractice has the potential to be damaging to public health, as well as to the research community, to junior scientists, to whistleblowers, and not least to the reputation of CNRS. I think it is more important that we have transparency about the response when data manipulation has been demonstrated by scientists funded by CNRS, than that I take into account Dr Mosseri’s sensitivities.

Note re comments on this blog. Comments are moderated to protect against spam. There may be some delay before they appear; if this is a concern, please email me. I generally publish comments provided they are on topic, coherent and not libellous.