
Thursday, 30 March 2023

Open letter to CNRS

Need for transparent and robust response when research misconduct is found

Scroll down for update on correspondence with CNRS Scientific Integrity Officer, 30th March 2023.

(French translation available in Appendix 3 of this document)

This Open Letter is prompted by an article in Le Monde describing an investigation into alleged malpractice at a chemistry lab in CNRS-Université Sorbonne Paris Nord and the subsequent report into the case by CNRS. The signatories are individuals from different institutions who have been involved in investigations of research misconduct in different disciplines, all concerned that the same story is repeated over and over when someone identifies unambiguous evidence of data manipulation.  Quite simply, the response by institutions, publishers and funders is typically slow, opaque and inadequate, and is biased in favour of the accused, paying scant attention to the impact on those who use research, and placing whistleblowers in a difficult position.

 

The facts in this case are clear. More than 20 scientific articles from the lab of one principal investigator have been shown to contain recycled and doctored graphs and electron microscopy images. That is, results from different experiments, which should be distinct, are illustrated by identical figures, with the axis legends altered by copying and pasting numbers on top of the previous ones.

 

Everyone is fallible, and no scientist should be accused of malpractice when honest errors are committed. We need also to be aware of the possibility of accusations made in bad faith by those with an axe to grind. However, there comes a point when there is a repeated pattern of errors for a prolonged period for which there is no innocent explanation. This point is surely reached here: the problematic data are well-documented in a number of PubPeer comments on the articles (see links in Appendix 1 of this document).

 

The response by CNRS to this case, as explained in their report (see Appendix 2 of this document), was to request correction rather than retraction of what were described as “shortcomings and errors”, and to accept the scientist’s account that there was no intentionality, despite clear evidence of extensive manipulation and reuse of figures; the only disciplinary sanction imposed was exclusion from duties for just one month.

 

So what should happen when fraud is suspected?  We propose that there should be a prompt investigation, with all results transparently reported. Where there are serious errors in the scientific record, then the research articles should immediately be retracted, any research funding used for fraudulent research should be returned to the funder, and the person responsible for the fraud should not be allowed to run a research lab or supervise students. The whistleblower should be protected from repercussions.

 

In practice, this seldom happens. Instead, we typically see, as in this case, prolonged and secret investigations by institutions, journals and/or funders. There is a strong bias to minimize the severity of malpractice, and to recommend that published work be “corrected” rather than retracted.

 

One can see why this happens. First, all of those concerned are reluctant to believe that researchers are dishonest, and are more willing to assume that the concerns have been exaggerated. It is easy to dismiss whistleblowers as deluded, overzealous or jealous of another’s success. Second, there are concerns about reputational risk to an institution if accounts of fraudulent research are publicised. And third, there is a genuine risk of litigation from those who are accused of data manipulation. So in practice, research misconduct tends to be played down.

 

However, this failure to act effectively has serious consequences:

1.   It gives credibility to fictitious results, slowing down the progress of science by encouraging others to pursue false leads. This can be particularly damaging for junior researchers who may waste years trying to build on invented findings. And in the age of big data, where results in fields such as genetics and pharmaceuticals are harvested to contribute to databases of knowledge, erroneous data pollutes the databases on which we depend.

2.   Where the research has potential for clinical or commercial application, there can be direct damage to patients or businesses.

3.   It allows those who are prepared to cheat to compete with other scientists to gain positions of influence, and so perpetuate further misconduct, while damaging the prospects of honest scientists who obtain less striking results.

4.   It is particularly destructive when data manipulation involves the Principal Investigator of a lab. This creates challenges for honest early-career scientists based in the lab where malpractice occurs – they usually have the stark options of damaging their career prospects by whistleblowing, or leaving science. Those with integrity are thus removed from the pool of active researchers. Those who remain are those who are prepared to overlook integrity in return for career security.  CNRS has a mission to support research training: it is hard to see how this can be achieved if trainees are placed in a lab where misconduct occurs.

5.   It wastes public money from research grants.

6.   It damages public trust in science and trust between scientists.

7.   It damages the reputation of the institutions, funders, journals and publishers associated with the fraudulent work.

8.   Whistleblowers, who should be praised by their institution for doing the right thing, are often made to feel that they are somehow letting the side down by drawing attention to something unpleasant. They are placed at high risk of career damage and stress, and without adequate protection by their institution, may be at risk of litigation. Some institutions have codes of conduct where failure to report an incident that gives reasonable suspicion of research misconduct is itself regarded as misconduct, yet the motivation to adhere to that code will be low if the institution is known to brush such reports under the carpet.

 

The point of this letter is not to revisit the rights and wrongs of this specific case or to promote a campaign against the scientist involved. Rather, we use this case to illustrate what we see as an institutional malaise that is widespread in scientific organisations.  We write to CNRS to express our frustration at their inadequate response to this case, and to ask that they review their disciplinary processes and consider adopting a more robust, timely and transparent process that treats data manipulation with the seriousness it deserves, and serves the needs not just of their researchers, but also of other scientists, and of the public who ultimately provide the research funding.

 

Signed by:

 

Dorothy Bishop, FRS, FBA, FMedSci, Professor of Developmental Neuropsychology (Emeritus), University of Oxford, UK.

 

Patricia Murray, Professor of Stem Cell Biology and Regenerative Medicine, University of Liverpool, UK.

 

Elisabeth Bik, PhD, Science Integrity Consultant

 

Florian Naudet, Professor of Therapeutics, Université de Rennes and Institut Universitaire de France, Paris

 

David Vaux, AO FAA, FAHMS, Honorary Fellow WEHI, & Emeritus Professor University of Melbourne, Australia

 

David A. Sanders, Department of Biological Sciences, Purdue University, USA.

 

Ben W. Mol, Professor of Obstetrics and Gynecology, Melbourne, Australia

 

Timothy D. Clark, PhD, School of Life & Environmental Sciences, Deakin University, Geelong, Australia

 

David Robert Grimes, PhD, School of Medicine, Trinity College Dublin, Ireland

 

Fredrik Jutfelt, Professor of Animal Physiology, Norwegian University of Science and Technology, Trondheim, Norway

 

Nicholas J. L. Brown, PhD, Linnaeus University, Sweden

 

Dominique Roche, Marie Skłodowska-Curie Global Fellow, Institut de biologie, Université de Neuchâtel, Switzerland

 

Lex M. Bouter, Professor Emeritus of Methodology and Integrity, Amsterdam University Medical Center and Vrije Universiteit, Amsterdam, The Netherlands

 

Josefin Sundin, PhD, Department of Aquatic Resources, Swedish University of Agricultural Sciences, Sweden

 

Nick Wise, PhD, Engineering Department, University of Cambridge, UK

 

Guillaume Cabanac, Professor of Computer Science, Université Toulouse 3 – Paul Sabatier and Institut Universitaire de France

 

Iain Chalmers, DSc, MD, FRCPE, Centre for Evidence-Based Medicine, University of Oxford.

 

Response from CNRS, received 28th Feb 2023. 

Dear Colleagues,

I have read the open letter you sent me by email on February 22, entitled "Need for transparent and robust response when research misconduct is found".

I am very surprised that you did not think it necessary to contact the CNRS before publishing this open letter. You are obviously not familiar, or at least very unfamiliar, with CNRS policy and procedures regarding scientific integrity.

The CNRS deals with these essential issues without any complacency, but tries to be fair and to ensure that the sanctions are proportional to the misconduct committed, while respecting the rules of the French civil service.

Your letter mixes generalities about the so-called actions of scientific institutions with paragraphs that apply, perhaps, to the CNRS. If you wish to know how scientific misconduct is handled at the CNRS, I invite you to contact our scientific integrity officer, Rémy Mosseri.

Kind regards,

Professor Antoine Petit, CNRS CEO

 

 

Update: March 30th 2023

As recommended by Prof Petit, we contacted Dr Rémy Mosseri, Scientific Integrity Officer, with some specific questions about how research integrity is handled at CNRS. The ensuing correspondence is provided here: 

13th March 2023   

 Dear Dr Mosseri

As you will have seen, Prof Antoine Petit replied to our previous open letter (which you were copied into) concerning the case of research misconduct at Université Sorbonne Paris Nord, featured in Le Monde. We can add that since drawing attention to this case, additional serious concerns have been raised about papers of this group:   

https://pubpeer.com/publications/0FA5031C555737851A865644B55B66 (comments #2 and #3)

https://pubpeer.com/publications/67AC8D60812782300BB58D6D32E67D

https://pubpeer.com/publications/274206B58670596FD557A1E71D41FF

https://pubpeer.com/publications/E1BEDDC613F4DE1F0DBF68F2CE6C57

At the suggestion of Prof Petit, we are writing now to request further information about the processes used to evaluate research integrity by CNRS.

The specific points where it would be helpful to have clarification are:  

1. When problems are repeated across many papers, what are the criteria for concluding that there are “shortcomings and errors” rather than misconduct or fraud? Are specific definitions used by CNRS?

2. When an investigation concludes that a publication contains material that is fabricated, falsified or plagiarised, what criteria are used to determine a recommendation that the paper be corrected, retracted, or other?  

 3. Where it is concluded that a paper should be corrected or retracted, does CNRS require that the notice of retraction/correction mention the reason for this action?   

4. We note that some CNRS reports into research misconduct have been published (https://mis.cnrs.fr/rapports/). What criteria are used to determine whether reports are confidential or public?   

5. What training do CNRS staff and students have in research integrity, and are specific training measures implemented in cases where misconduct has been confirmed?   

6. Do CNRS rules specify that a failure to report suspected research misconduct is itself misconduct?   

7. What measures does CNRS take to protect whistleblowers?   

 (signed by Dorothy Bishop + signatories of original open letter)  


15th March 2023   

Dear collegues,

I will be pleased to try to answer (as best as possible, some questions are more complicate than others) your questions (I guess in english). Due to overbusiness, please forgive me if this is not done immediately. But I expect being able to answer within 2 weeks max.

I would prefer, if you can agree with that, that these answers stay informal. In other word, this would not be considered as an interview or an official document from me, from which I may find in the future selected part reproduced on the internet, without possibly (once it is in the net) the precise context in which they have been written. Would you agree on that?

There are some points in your open letters with which I may disagree, as far as CNRS is concerned. The difficulty is that you wrote an open letter to CNRS, but included general criticisms addressed apparently to the general academic IS treatment (I guess not only focused on CNRS, and even not only to France). If you are interested by my remarks, beside your own questions, I may formulate them. If interested, we could also have a more open and reactive discussion on that, by zoom.

In the meantime, please find enclosed a recent summary (in english) of the MIS activity, which may already get you interested.

Yours sincerely   

Rémy Mosseri  


17th March

Dear Dr Mosseri   

Thanks for your prompt reply, and the interesting MIS summary. We do of course understand that you need time to reply. We would prefer to have a formal response from you, in your role as integrity officer, relating to the specific questions we have raised. The reason we are writing to you is concern about how CNRS has responded to the case reported in Le Monde. These concerns are of particular interest to the signatories because of our prior experience with institutional responses to cases of fraud. There is considerable international public interest in these matters. I hope you will be able to respond to our questions in a way that we can share publicly. I am happy to give an undertaking that I will not knowingly misrepresent anything you say, or present it out of context.

Yours sincerely   

Dorothy Bishop, FRS, FMedSci, FBA  


18th March   

Dear Mrs Bishop,

I return to you about two points

1) You may know that (i) I must apply a strict confidentiality about the cases we treat, (ii) I cannot start (decide alone) an investigation without having a documented allegation that I can then send to the targetted persons asking for a reply. You mention in your letters 4 pubpeer new posts concerning the case discussed in a french newspaper last december. It is not clear for me whether you considered that mentioning these posts was a formal allegation or just an information. In the first case, I must tell you that just sending to a pubpeer post is not considered by us as a formal allegation. If you ask for an investigation to be opened on new elements, you are invited write and send us a detailed allegation.

2) I have a problem with your answer. I am always very interested to discuss and present the rules underlying our practice (and my impression is that you miss informations about them), and even to listen to propositions to improve them. I proposed an unformal open discussion with your group, even by zoom, in which I could expose the coherence underlying our action, and the rules themselves. Notice that we claimed from the start (2018 for the MIS) that these rules are certainly perfectible; I also had in mind to explain why I may object to some statements of your letter. You do not seem interested by all this. By the way, I find quite questionable that your questions (which are certainly interesting, and do not cover the full subject) are sent to us after you opened your public campaign, and not before (as far as I know, but I may be wrong, no prior contact has been taken by your group with CNRS). I therefore do not think that presenting the coherence of our action can rely on your future decisions.

So we will probably proceed differently. Although some of these informations are already present (in french) in our website, we will write a public document, posted on our web site, in french and english, detailing our rules and the principles guiding our action. We already had this in mind, but did not find time to dot it (in particular having informations written in english). Most of your questions (and many others) should be answered in this more global document, and you will therefore be free to use this information (by citing the whole text).

Sincerely yours

Rémy Mosseri  


28th March

Dear Dr Mosseri

Your suggestion of creating a global document in response to the questions we raised is most welcome. Thank you.

The information about MIS is also very welcome. Thank you also for explaining the situation with regard to allegations of malpractice. This does make clear the distinctive characteristics of the CNRS procedures in investigating integrity. It is understandable that a formal allegation might be needed to initiate new investigation, to avoid CNRS being overwhelmed by information or by trivial complaints, and to protect employees from malicious actors; it was rather surprising, though, to hear that you would ignore additional evidence relating to an existing case, especially when brought to you by serious integrity experts. Given that the research that is the topic of the case is clinically relevant, the malpractice has potential to be damaging to public health, as well as to the research community, to junior scientists, to whistleblowers, and not least to the reputation of CNRS. It would seem a matter of some urgency to remedy matters if a CNRS-funded research group is publishing manipulated data in multiple papers.   

 To avoid complications of co-ordinating numerous people, I hereby make a formal request in my own name, specifically asking you to investigate a number of new issues that have arisen since your original investigation. I am ccing to the Research Integrity officer at Université Sorbonne Paris Nord, who I assume would also need to be involved in any investigation.   

Here are specific concerns regarding publications from the Laboratoire de Réactivité de Surfaces, UMR CNRS 7197 and CNRS, UMR 7244, CSPBAT, Laboratoire de Chimie, Structures et Propriétés de Biomateriaux et d'Agents Therapeutiques. The evidence of data fabrication and questionable methods is evident in the published papers and is described in the linked PubPeer comments, which I briefly summarise here:   

https://pubpeer.com/publications/684C7691DAAD7FCD6B7E9BBCE5346C
Rectangles placed over images showing data, obscuring some regions.

https://pubpeer.com/publications/99DFA69EC0222D3C40477DE9B8F8D6
Concerns expressed about inadequate corrections of earlier work. This suggests that where CNRS has proposed correction of problematic work, it has not confirmed that the correction is satisfactory.

https://pubpeer.com/publications/E1BEDDC613F4DE1F0DBF68F2CE6C57
An expert, Elisabeth Bik, has identified evidence of cut-and-paste of areas in photos of tumours.

https://pubpeer.com/publications/274206B58670596FD557A1E71D41FF
The same plot repeated in different publications.

https://pubpeer.com/publications/1076593A614D44E5019C69C642282B
Another unsatisfactory correction, where inconsistencies remain in the paper.

https://pubpeer.com/publications/0FA5031C555737851A865644B55B66
In addition to reuse of the same histograms across multiple papers, already noted by Raphael Levy, further comments have been added by Elisabeth Bik noting evidence of duplication of regions of plots within figures.

https://pubpeer.com/publications/EA48A476C8B55E382AFD4BD56BDEC6
Yet another correction that does not satisfactorily deal with concerns.

https://pubpeer.com/publications/C9081BBA3DCD96D61FC7E1C22274FA
And another correction that seems to raise more questions.

https://pubpeer.com/publications/36885F09E68EA7D5E881C625BFD998
Curves that should describe experimental data appear to be generated by formula, and have identical noise patterns.

https://pubpeer.com/publications/FA4ABD243E8518B6C72024EDB98DFA#
Curves that should describe experimental data appear to be generated by formula, and have identical noise patterns.

https://pubpeer.com/publications/DE9875DC8BA22466DB129179506638
A retracted paper appears to have been republished with only minor changes.

https://pubpeer.com/publications/5569A968DD6668A7FBCDD3A355507E
Inconsistencies between the reported size of nanoparticles and the figures.

Please note that this list is likely to grow, as I have been told of concerns regarding other publications that are still being compiled. It would be helpful if your committee could monitor these proactively on PubPeer, rather than relying on sleuths to bring them to your attention with a formal allegation.   

I am sorry we disagree about the benefits of confidentiality vs. transparency. I appreciate that you may not wish to communicate further with me, because I do intend to make correspondence with CNRS public, as I think this is in the public interest. This is not a comfortable situation, but I hope that in the long term further scrutiny of cases of misconduct and institutional responses to them might help us reach a rapprochement about the appropriate methods to adopt in such cases.   

Yours sincerely   

Dorothy Bishop

28th March 2023   

Dear Mrs Bishop,   

I understand that you do not agree with our imperative rules of confidentiality, and with the form under which an allegation should be sent to us in order to possibly open an investigation. It seems that, as a general principle, emails have the same status as private correspondance, and should therefore not be tranferred to third parties without the consent of the author of the email. I politely answered to your emails, but had not in mind that these answers would be made public without my consent. Knowing that, do what your personal ethics tells you...   

Yours sincerely   

 Rémy Mosseri 

Afterword   

My personal ethics tell me to publish this correspondence, even though Dr Mosseri feels this is inappropriate. There are situations when confidentiality is important, especially early in an investigation when allegations are made and information is discussed that could affect a scientist’s reputation, before the validity of the allegations is established. However, none of the matters discussed with Dr Mosseri are of this nature. Our questions to him were general ones about CNRS procedures. We rejected his suggestion that we should discuss these informally, and asked instead for a formal response by him in his role as Scientific Integrity Officer. Insofar as evidence of scientific misconduct is mentioned in our correspondence, this relates to a case that has already been discussed in a report that is in the public domain, and all the PubPeer comments are also in the public domain.

Ethical judgements involve weighing up conflicting interests. As noted in my last email, in this case, research malpractice has the potential to be damaging to public health, as well as to the research community, to junior scientists, to whistleblowers, and not least to the reputation of CNRS. I think it is more important that we have transparency about the response when data manipulation has been demonstrated by scientists funded by CNRS, than that I take into account Dr Mosseri’s sensitivities.

Note re comments on this blog. Comments are moderated to protect against spam. There may be some delay before they appear; if this is a concern, please email me. I generally publish comments provided they are on topic, coherent and not libellous.

Tuesday, 12 April 2022

Book Review. Fiona Fox: Beyond the Hype

If you're a scientist reading this, you may well think, as I used to, that running a Science Media Centre (SMC) would be a worthy but rather dull existence. Surely, it's just a case of getting scientists to explain things clearly in non-technical language to journalists. The fact that the SMC was created in part as a response to the damaging debacle of the MMR scandal might suggest that it would be a straightforward job of providing journalists with input from experts rather than mavericks, and helping them distinguish between the two. 

I now know it's not like that, after being on the Science Media Centre's panel of experts for many years, and having also served on their advisory committee for a few of them. The reality is described in this book by SMC's Director, Fiona Fox, and it's riveting stuff.

In part this is because no science story is simple. People will disagree about the quality of the science, the meaning of the results, and the practical implications. Topics such as climate change, chronic fatigue syndrome/ME and therapeutic cloning elicit highly charged responses from those who are affected by the science. More recently, we have found that when a pandemic descends upon the world, some of the bitterest disagreements are not between scientists and the media, but between well-respected, expert scientists. The idea that scientists can hand down tablets of stone inscribed with the truth to the media is a fiction that is clearly exposed in this book.

Essentially, the SMC might be seen as acting like a therapist in the midst of a seriously dysfunctional family where everyone misunderstands everyone else, and everyone wants different things out of life. On the one hand we have the scientists. They get frustrated because they feel they should be able to make exciting new discoveries, with the media then helping communicate these to the world. Instead, they complain that the media has two responses: either they're not interested in the science, or they want to sensationalise it. If you find a mild impact of grains on sexual behaviour in rats, you'll find it translated into the headline 'Cornflakes make you impotent'.

On the other hand, we have the media. They want a good story, but find that the scientists are reluctant to talk to them, or want total control of how the story is presented. In the worst case, scientists are prima donnas who want days or weeks to prepare for a media interview and will then shower the journalist with detailed information that is incomprehensible, irrelevant, or both. When the public desperately needs a clear, simple message, the scientists will refuse to deliver it, hedging every statement.

Fox has worked over the years to challenge these stereotypes: journalists do want a good story, but the serious science journalists want a true story, and are glad of the opportunity to pose questions directly to scientists. And many scientists do a fantastic job of explaining their subject matter to a non-specialist audience. In the varied chapters of the book, Fox is an irrepressible optimist, who keeps coming back to the importance of having scientists communicating directly with the media. Her optimism is not founded in ignorance: she knows exactly how messy and complicated science can be. But she persists in believing that more good is done by communicating what we know, warts and all, rather than pretending that uncertainties and disagreements do not exist.

The role of the SMC is, however, complicated by further factions. The dramatis personae includes two other groups. First, there are science press officers, who are appointed by institutions to help scientists promote their work, and then there are government officials and civil servants, who are concerned with policy implications of science.

In her penultimate chapter, Fox bemoans the fact that the traditional press officer - passionate about science and viewing themselves as "purveyors of truth and accuracy" - is a dying breed. There remain notable exceptions, but all too often science communication has become conflated with a public relations role: pushing a corporate message, defending the institutional reputation, and even using scientific discoveries as a marketing tool. Fox notes a 2014 survey of exaggerated science reports in the media that concluded: "Exaggeration in news is strongly associated with exaggeration in press releases." I had been one of those scientists who thought the media were mostly to blame for over-hyped science reporting, but this study showed that journalists are often recycling exaggerated accounts handed to them by those speaking for the scientists.

But the problems posed by scientists, journalists and press officers are trivial compared to the obstacles created by those involved in policy. They want to use science when convenient, but also want to exert control over which aspects of science policy get talked about. Scientists working for government-funded organisations are often muzzled, with explicit instructions not to talk to the media. One can see that this cautious approach, attempting to control the message and keep things simple, puts many civil servants and government scientists on a collision course with Fox, whose view is: "Explaining preliminary and contradictory science is messy: that should not be seen as a failure of communications".

A refreshing aspect of Fox's account is that she does not brush aside the occasions when the SMC - or she personally - may have handled a situation badly. Of course, it's easy to point the finger of blame when something does go horribly wrong, and Fox has come under fire on many occasions. Rather than being defensive, she accepts that things might have been done differently, while at the same time explaining the logic of the decisions that were taken. This is in line with my memories of meetings of the SMC advisory committee, where there were frequent post mortems - "this is how we handled it; this is how it turned out; should we have done it differently?" - with frank discussions from the committee members. When you are working in contentious areas where things are bound to blow up every now and again, this is a sensible strategy that helps the organisation learn and develop. I'm glad that after 20 years, the ethos of the SMC is still very much on the side of open, transparent communication between scientists and the media.  


Fox, Fiona (2022) Beyond the Hype: The Inside Story of Science's Biggest Media Controversies. London: Elliott and Thompson Ltd.


Saturday, 10 August 2019

A day out at 10 Downing Street

Yesterday, I attended a meeting at 10 Downing Street with Dominic Cummings, special advisor to Boris Johnson, for a discussion about science funding. I suspect my invitation will be regarded, in hindsight, as a mistake, and I hope some hapless civil servant does not get into trouble over it. I discovered that I was on the invitation list because of a recommendation by the eminent mathematician Tim Gowers, whom Cummings venerates. Tim wasn't able to attend the meeting, but apparently he is a fan of my blog, and we have bonded over a shared dislike of the evil empire of Elsevier. I had heard that Cummings liked bold, new ideas, and I thought that I might be able to contribute something, given that science funding is something I have blogged about.

The invitation came on Tuesday and, having confirmed that it was not a spoof, I spent some time reading Cummings' blog, to get a better idea of where he was coming from. The impression is that he is besotted with science, especially maths and technology, and impatient with bureaucracy. That seemed promising common ground.

The problem, though, is that as a major facilitator of Brexit in 2016, who is now persisting with the idea that Brexit must be achieved "at any cost", he is doing immense damage, because science transcends national boundaries. Don't just take my word for it: it's a message that has been stressed by the President of the Royal Society, the Government's Chief Scientific Advisor, the Chair of the Wellcome Trust, the President of the Academy of Medical Sciences, and the Director of the Crick Institute, among others. 

The day before the meeting, I received an email to say that the topic of discussion would be much narrower than I had been led to believe. The other invitees were four Professors of Mathematics and the Director of the Engineering and Physical Sciences Research Council. We were sent a discussion document written by one of the professors outlining a wish list for improvements in funding for academic mathematics in the UK. I wasn't sure if I was a token woman: I suspect Cummings doesn't go in for token women and that my invite was simply because it had been assumed that someone recommended by Gowers would be a mathematician. I should add that my comments here are in a personal capacity and my views should not be taken as representing those of the University of Oxford.

The meeting started, rather as expected, with Cummings saying that we would not be talking about Brexit, because "everyone has different views about Brexit" and it would not be helpful. My suspicion was that everyone around the table other than Cummings had very similar views about Brexit, but I could see that we'd not get anywhere arguing the point. So we started off feeling rather like a patient who visits a doctor for medical advice, only to be told "I know I just cut off your leg, but let's not mention that."

The meeting proceeded in a cordial fashion, with Cummings expressing his strong desire to foster mathematics in British universities, and asking the mathematicians to come up with their "dream scenario" for dramatically enhancing the international standing of their discipline over the next few years. As one might expect, more funding for researchers at all levels, longer duration of funding, plus less bureaucracy around applying for funding were the basic themes, though Brexit-related issues did keep leaking into the conversation – everyone was concerned about difficulties of attracting and retaining overseas talent, and about loss of international collaborations funded by the European Research Council. Cummings was clearly proud of the announcement on Thursday evening about easing of visa restrictions on overseas scientists, which has potential to go some way towards mitigating some of the problems created by Brexit. I felt, however, that he did not grasp the extent to which scientific research is an international activity, and breakthroughs depend on teams with complementary skills and perspectives, rather than the occasional "lone genius".  It's not just about attracting "the very best minds from around the world" to come and work here.

Overall, I found the meeting frustrating. First, I felt that Cummings was aware that there was a conflict between his twin aims of pursuit of Brexit and promotion of science, but he seemed to think this could be fixed by increasing funding and cutting regulation. I also wonder where on earth the money is coming from. Cummings made it clear that any proposals would need Treasury approval, but he encouraged the mathematicians to be ambitious, and talked as if anything was possible. In a week when we learn the economy is shrinking for the first time in years, it's hard to believe he has found the forest of magic money trees that are needed to cover recent spending announcements, let alone additional funding for maths.

Second, given Cummings' reputation, I had expected a far more wide-ranging discussion of different funding approaches. I fully support increased funding for fundamental mathematics, and did not want to cut across that discussion, so I didn't say much. I had, however, expected a bit more evidence of creativity. In his blog, Cummings refers to the Defense Advanced Research Projects Agency (DARPA), which is widely admired as a model for how to foster innovation. DARPA was set up in 1958 with the goal of giving the US superiority in military and other technologies. It combined blue-skies and problem-oriented research, and was immensely successful, leading to the development of the internet, among other things. In his preamble, Cummings briefly mentioned DARPA as a useful model. Yet, our discussion was entirely about capacity-building within existing structures.

Third, no mention was made of problem-oriented funding. Many scientists dislike having governments control what they work on, and indeed, blue-skies research often generates quite unexpected and beneficial outcomes. But we are in a world with urgent problems that would benefit from focussed attention of an interdisciplinary, and dare I say it, international group of talented scientists. In the past, it has taken world wars to force scientists to band together to find solutions to immediate threats. The rapid changes in the Arctic suggest that the climate emergency should be treated just like a war - a challenge to be tackled without delay. We should be deploying scientists, including mathematicians, to explore every avenue to mitigating the effects of global heating – physical and social – right now. Although there is interesting research on solar geoengineering going on at Harvard, it is clear that, under the Trump administration, we aren't going to see serious investment from the USA in tackling global heating. And, in any case, a global problem as complex as climate needs a multi-pronged solution. The economist Mariana Mazzucato understands this: her proposals for mission-oriented research take a different approach to the conventional funding agencies we have in the UK. Yet when I asked whether climate research was a priority in his planning, Cummings replied "it's not up to me". He said that there were lots of people pushing for more funding for research on "climate change or whatever", but he gave the impression that it was not something he would give priority to, and he did not display a sense of urgency. That's surprising in someone who is scientifically literate and has a child.

In sum, it's great that we have a special advisor who is committed to science. I'm very happy to see mathematics as a priority funding area. But I fear Dominic Cummings overestimates the extent to which he can mitigate the negative consequences of Brexit, and it is particularly unfortunate that his priorities do not include the climate emergency that is unfolding.

Sunday, 26 August 2018

Should editors edit reviewers?


How Einstein dealt with peer review: from http://theconversation.com/hate-the-peer-review-process-einstein-did-too-27405

This all started with a tweet from Jesse Shapiro under the #shareyourrejections hashtag:

JS: Reviewer 2: “The best thing these authors [me and @ejalm] could do to benefit this field of study would be to leave the field and never work on this topic again.” Paraphrasing only slightly.

This was quickly followed by another example:
Bill Hanage: #ShareYourRejections “this paper is not suitable for publication in PNAS, or indeed anywhere.”

Now, both of these are similarly damning, but there is an important difference. The first one criticises the authors, the second one criticises the paper. Several people replied to Jesse’s tweet with sympathy, for instance:

Jenny Rohn: My condolences. But Reviewer 2 is shooting him/herself in the foot - most sensible editors will take a referee's opinion less seriously if it's laced with ad hominem attacks.

I took a different tack, though:
DB: A good editor would not relay that comment to the author, and would write to the reviewer to tell them it is inappropriate. I remember doing that when I was an editor - not often, thankfully. And reviewer apologised.

This started an interesting discussion on Twitter:

Ben Jones: I handled papers where a reviewer was similarly vitriolic and ad hominem. I indicated to the reviewer and authors that I thought it was very inappropriate and unprofessional. I’ve always been very reluctant to censor reviewer comments, but maybe should reconsider that view

DB: You're the editor. I think it's entirely appropriate to protect authors from ad hominem and spiteful attacks. As well as preventing unnecessary pain to authors, it helps avoid damage to the reputation of your journal

Chris Chambers: Editing reviews is dangerous ground imo. In this situation, if the remainder of the review contained useful content, I'd either leave the review intact but inform the authors to disregard the ad hom (& separately I'd tell reviewer it's not on) or dump the whole review.

DB: I would inform reviewer, but I don’t think it is part of editor’s job to relay abuse to people, esp. if they are already dealing with pain of rejection.

CC: IMO this sets a dangerous precedent for editing out content that the editor might dislike. I'd prefer to keep reviews unbiased by editorial input or drop them entirely if they're junk. Also, an offensive remark or tone could in some cases be embedded w/i a valid scientific point.

Kate Jeffery: I agree that editing reviewer comments without permission is dodgy but also agree that inappropriate comments should not be passed back to authors. A simple solution is for editor to revise the offending sentence(s) and ask reviewer to approve change. I doubt many would decline.

A middle road was offered by Lisa deBruine:
LdB: My solution is to contact the reviewer if I think something is wrong with their review (in either factual content or professional tone) and ask them to remove or rephrase it before I send it to the authors. I’ve never had one decline (but it doesn’t happen very often).

I was really surprised by how many people felt strongly that the reviewer’s report was in some sense sacrosanct and could and should not be altered. I’ve pondered this further, but am not swayed by the arguments.

I feel strongly that editors should be able to distinguish personal abuse from robust critical comment, and that, far from being inappropriate, it is their duty to remove the former from reviewer reports. And as for Chris’s comment: ‘an offensive remark or tone could in some cases be embedded w/i a valid scientific point’ – the answer is simple. You rewrite to remove the offensive remark; e.g. ‘The authors seem clueless about the appropriate way to run a multilevel model’ could be rewritten to ‘The authors should take advice from a statistician about their multilevel model, which is not properly specified’. And to be absolutely clear, I am not talking about editing out comments that are critical of the science, or which the editor happens to disagree with. If a reviewer got something just plain wrong, I’m okay with giving a clear steer in the editor’s letter, e.g.: ‘Reviewer A suggests you include age as a covariate. I notice you have already done that in the analysis on p x, so please ignore that comment.’ I am specifically addressing comments that are made about the authors rather than the content of what they have written. A good editor should find that an easy distinction to make. From the perspective of an author, being called out for getting something wrong is never comfortable: being told you are a useless person because you got something wrong just adds unnecessary pain.

Why do I care about this? It’s not just because I think we should all be kind to each other (though, in general, I think that’s a good idea). There’s a deeper issue at stake here. As editors, we should work to reinforce the idea that personal disputes should have no place in science. Yes, we are all human beings, and often respond with strong emotions to the work of others. I can get infuriated when I review a paper where the authors appear to have been sloppy or stupid. But we all make mistakes, and are good at deluding ourselves. One of the problems when you start out is that you don’t know what you don’t know: I learned a lot from having my errors pointed out by reviewers, but I was far more likely to learn from this process if the reviewer did not adopt a contemptuous attitude. So, as reviewers, we should calm down and self-edit, and not put ad hominem comments in our reviews. Editors can play a role in training reviewers in this respect.

For those who feel uncomfortable with my approach - i.e. edit the review and tell the reviewer why you have done so – I would recommend Lisa deBruine’s solution of raising the issue with the reviewer and asking them to amend their review. Indeed, in today’s world where everything is handled by automated systems, that may be the only way of ensuring that an insulting review does not go to the author (assuming the automated system lets you do that!).

Finally, everyone agreed that this does not seem to be a common problem, so it is perhaps not worth devoting much space to, but I'm curious to know how other editors respond to this issue.

Saturday, 17 June 2017

Prospecting for kryptonite: the value of null results


This blogpost doesn't say anything new – it just uses a new analogy (at least new to me) to make a point about the value of null results from well-designed studies. I was thinking about this after reading this blogpost by Anne Scheel.

Think of science like prospecting for kryptonite in an enormous desert. There's a huge amount of territory out there, and very little kryptonite. Suppose also that the fate of the human race depends crucially on finding kryptonite deposits.

Most prospectors don't find kryptonite. Not finding kryptonite is disappointing: it feels like a lot of time and energy has been wasted, and the prospector leaves empty-handed. But the failure is nonetheless useful. It means that new prospectors won't waste their time looking for kryptonite in places where it doesn't exist.  If, however, someone finds kryptonite, everyone gets very excited and there is a stampede to rush to the spot where it was discovered.

Contemporary science works a bit like this, except that the whole process is messed up by reporting bias and poor methods which lead to false information.

To take reporting bias first: suppose the prospector who finds nothing doesn't bother to tell anyone. Then others may come back to the same spot and waste time also finding nothing. Of course, some scientists are like prospectors in that they are competitive and would like to prevent other people from getting useful information. Having a competitor bogged down in a blind alley may be just what they want for their rivals. But where there is an urgent need for new discovery, there needs to be a collaborative rather than competitive approach, to speed up discovery and avoid waste of scarce funds. In this context, null results are very useful.

False information can come from the prospector who declares there is no kryptonite on the basis of a superficial drive through a region. This is like the researcher who does an underpowered study that gets an inconclusive null result. It doesn't allow us to map out the region with kryptonite-rich and kryptonite-empty areas – it just leaves us having to go back and look again more thoroughly. Null results from poorly designed studies are not much use to anyone.

But the worst kind of false information is fool's kryptonite: someone declares they have found kryptonite, but they haven't. So everyone rushes off to that spot to try and find their own kryptonite, only to find they have been deceived. So there are a lot of wasted resources and broken hearts. For a prospector who has been misled in this way, this situation is worse than just not finding any kryptonite, because their hopes have been raised and they may have put a disproportionate amount of effort and energy into pursuing the false information.

Pre-registering a study is the equivalent of prospectors declaring publicly that they are doing a comprehensive survey of a specific region, and will declare what they have found, so that the map can gradually be filled in, with no duplication of effort.

Some will say, what about exploratory research? Of course the prospector may hit lucky and find some other useful mineral that nobody had anticipated. If so, that's great, and it may even turn out more important than kryptonite. But the point I want to stress is that the norm for most prospectors is that they won't find kryptonite or anything else. Really exciting findings occur rarely, yet our current incentive structures create the impression that you have to find something amazing to be valued as a scientist.  It would make more sense to reward those who do a good job of prospecting, producing results that add to our knowledge and can be built upon.

I'll leave the last word to Ottoline Leyser, who in an interview for The Life Scientific said: "There's an awful lot of talk about ground-breaking research…. Ground-breaking is what you do when you start a building. You go into a field and you dig a hole in the ground. If you're only rewarded for ground-breaking research, there's going to be a lot of fields with a small hole in, and no buildings."




Saturday, 1 October 2016

On the incomprehensibility of much neurogenetics research


Together with some colleagues, I am carrying out an analysis of methodological issues such as statistical power in papers in top neuroscience journals. Our focus is on papers that compare brain and/or behaviour measures in people who vary on common genetic variants.

I'm learning a lot by being forced to read research outside my area, but I'm struck by how difficult many of these papers are to follow. I'm neither a statistician nor a geneticist, but I have a nodding acquaintance with both disciplines, as well as with neuroscience, yet in many cases I find myself struggling to make sense of what researchers did and what they found. Some papers have taken hours of reading and re-reading just to get at the key information that we are seeking for our analysis, i.e. what was the largest association that was reported.

This is worrying for the field, because the number of people competent to review such papers will be extremely small. Good editors will, of course, try to cover all bases by finding reviewers with complementary skill sets, but this can be hard, and people will be understandably reluctant to review a highly complex paper that contains a lot of material beyond their expertise.  I remember a top geneticist on Twitter a while ago lamenting that when reviewing papers they often had to just take the statistics on trust, because they had gone beyond the comprehension of all but a small set of people. The same is true, I suspect, for neuroscience. Put the two disciplines together and you have a big problem.

I'm not sure what the solution is. Making raw data available may help, in that it allows people to check analyses using more familiar methods, but that is very time-consuming and only for the most dedicated reviewer.

Do others agree we have a problem, or is it inevitable that as things get more complex the number of people who can understand scientific papers will contract to a very small set?

Saturday, 11 July 2015

Publishing replication failures: some lessons from history


I recently travelled to Lismore, Ireland, to speak at the annual Robert Boyle summer school. I had been intrigued by the invitation, as it was clear this was not the usual kind of scientific meeting. The theme of Robert Boyle, who was born in Lismore Castle, was approached from very different angles, and those attending included historians of science, scientists, journalists, as well as interested members of the public. We were treated to reconstructions of some of Boyle's livelier experiments, heard wonderful Irish music, and we celebrated the installation of a plaque at Lismore Castle to honour Katherine Jones, Boyle's remarkable sister, who was also a scientist.

My talk was on the future of scientific scholarly publication, a topic that the Royal Society had explored in a series of meetings to celebrate the 350th Anniversary of the publication of Philosophical Transactions. I'm particularly interested in the extent to which current publishing culture discourages good science, and I concluded by proposing the kind of model that I recently blogged about, where the traditional science journal is no longer relevant to communicating science.

What I hadn't anticipated was the relevance of some of Boyle's writing to such contemporary themes.

Boyle, of course, didn't have to grapple with issues such as the Journal Impact Factor or Open Access payments. But some of the topics he covered are remarkably contemporary. He would have been interested in the views of Jason Mitchell, John L. Loeb Associate Professor of the Social Sciences at Harvard, who created a stir last year by writing a piece entitled "On the emptiness of failed replications". I see that the essay has now been removed from the Harvard website, but the main points can be found here*. It was initially thought to be a parody, but it seems to have been a sincere attempt at defending the thesis that "unsuccessful experiments have no meaningful scientific value." Furthermore, according to Mitchell, "Whether they mean to or not, authors and editors of failed replications are publicly impugning the scientific integrity of their colleagues." I have taken issue with this standpoint in an earlier blogpost; my view is that we should not assume that a failure to replicate a result is due to fraud or malpractice, but rather should encourage replication attempts as a means of establishing which results are reproducible.

I am most grateful to Eoin Gill of Calmast for pointing me to Robert Boyle's writings on this topic, and for sending me transcripts of the most relevant bits. Boyle has two essays on "the Unsuccessfulness of Experiments" in a collection of papers entitled “Certain Physiological Essays and other Tracts”. In these he discusses (at inordinate length!) the problems that arise when an experimental result fails to replicate. He starts by noting that such unsuccessful experiments are not uncommon:
… in the serious and effectual prosecution of Experimental Philosophy, I must add one discouragement more, which will perhaps as much surprize you as dishearten you; and it is, That besides that you will find …… many of the Experiments publish'd by Authors, or related to you by the persons you converse with, false or unsuccessful, … you will meet with several Observations and Experiments, which though communicated for true by Candid Authors or undistrusted Eye-witnesses, or perhaps recommended to you by your own experience, may upon further tryal disappoint your expectation, either not at all succeeding constantly, or at least varying much from what you expected. (opening passage)
He is interested in exploring the reasons for such failure; his first explanation seems equivalent to one that those using statistical analyses are all too familiar with – a chance false positive result.
And that if you should have the luck to make an Experiment once, without being able to perform the same thing again, you might be apt to look upon such disappointments as the effects of an unfriendliness in Nature or Fortune to your particular attempts, as proceed but from a secret contingency incident to some experiments, by whomsoever they be tryed. (p. 44)
And he urges the reader not to be discouraged – replication failures happen to everyone!
…. though some of your Experiments should not always prove constant, you have divers Partners in that infelicity, who have not been discouraged by it. (p. 44)
He identifies various possible systematic reasons for such failure: a problem with skill of the experimenter, with purity of ingredients, or variation in the specific context in which the experiment is conducted. He even, implicitly, addresses statistical power, noting how one needs many observations to distinguish what is general from individual variation.
…the great variety in the number, magnitude, position, figure, &c. of the parts taken notice of by Anatomical Writers in their dissections of that one Subject the humane body, about which many errors would have been delivered by Anatomists, if the frequency of dissections had not enabled them to discern betwixt those things that are generally and uniformly found in dissected bodies, and those which are but rarely, and (if I may so speak) through some wantonness or other deviation of Nature, to be met with. (p. 94)
Because of such uncertainties, Boyle emphasises the need for replication, and the dangers of building complex theory on the basis of a single experiment:
….try those Experiments very carefully, and more than once, upon which you mean to build considerable Superstructures either theorical or practical, and to think it unsafe to rely too much upon single Experiments, especially when you have to deal in Minerals: for many to their ruine have found, that what they at first look'd upon as a happy Mineral Experiment has prov'd in the issue the most unfortunate they ever made. (p. 106)
I'm sure there are some modern scientists who must be thinking their lives may have been made much easier if they had heeded this advice. But perhaps the most relevant to the modern world, where there is such concern about the consequences of failure to replicate, are Boyle's comments on the reputational impact of publishing irreproducible results:
…if an Author that is wont to deliver things upon his own knowledge, and shews himself careful not to be deceived, and unwilling to deceive his Readers, shall deliver any thing as having try'd or seen it, which yet agrees not with our tryals of it; I think it but a piece of Equity, becoming both a Christian and a Philosopher, to think (unless we have some manifest reason to the contrary) that he set down his Experiment or Observation as he made it, though for some latent reason it does not constantly hold; and that therefore though his Experiment be not to be rely'd upon, yet his sincerity is not to be rejected. Nay, if the Author be such an one as has intentionally and really deserved well of Mankind, for my part I can be so grateful to him, as not only to forbear to distrust his Veracity, as if he had not done or seen what he says he did or saw, but to forbear to reject his Experiments, till I have tryed whether or no by some change of Circumstances they may not be brought to succeed. (p. 107)
The importance of fostering a 'no blame' culture was one theme that emerged in a recent meeting on Reproducibility and Reliability of Biomedical Research at the Academy of Medical Sciences. It seems that in this, as in so many other aspects of science, Boyle's views are well-suited to the 21st century.

For more on Robert Boyle, see here


12th July 2015: Thanks to Daniël Lakens who pointed me to the Wayback machine, where earlier versions of the article can be found:   http://web.archive.org/web/*/http://wjh.harvard.edu/~jmitchel/writing/failed_science.htm

Monday, 20 April 2015

How long does a scientific paper need to be?





©CartoonStock.com

There was an interesting exchange last week on PubMed Commons between Maurice Smith, senior author of a paper on motor learning, and Bjorn Brembs, a neurobiologist at the University of Regensburg. The main thrust of Brembs' critique was that the paper, which was presented as surprising, novel and original, failed adequately to cite the prior literature. I was impressed that Smith engaged seriously with the criticism, writing a reasoned defence of the choice of material in the literature review, and noting that claims of over-hyped statements were based on selective citation.  What really caught my attention was the following statement in his rebuttal: "We can reassure the reader that it was very painful to cut down the discussion, introduction, and citations to conform to Nature Neuroscience’s strict and rather arbitrary limits. We would personally be in favor of expanding these limits, or doing away with them entirely, but this is not our choice to make."
As it happens, this comment really struck home with me, as I had been internally grumbling about this very issue after a weekend of serious reading of background papers for a grant proposal I am preparing. I repeatedly found evidence that length limits were having a detrimental effect on scientific reporting. I think there are three problems here.
1. The first is exemplified by the debate around the motor learning paper. I don't know this area well enough to evaluate whether omissions in the literature review were serious, but I am all too familiar with papers in my own area where a brief introduction skates over the surface of past work. One feels that length limits play a big part in this, but there is also another dimension: to some editors and reviewers, a paper that starts by documenting how the research builds on prior work is at risk of being seen as merely 'incremental', rather than 'groundbreaking'. I was once explicitly told by an editor that too high a proportion of my references were more than five years old. This obsession with novelty is in danger of encouraging scientists to devalue serious scholarship as they zoom off to find the latest hot topic.  
2. In many journals, key details of methods are relegated to a supplement, or worse still, omitted altogether. I know that many people rejoiced when the Journal of Neuroscience declared it would no longer publish supplementary material: I thought it was a terrible decision. In most of the papers I read, the methodological detail is key to evaluating the science, and if we only get the cover story of the research, we can be seriously misled. Yes, it can be tedious to wade through supplementary material, but if it is not available, how do we know the work is sound?
3. The final issue concerns readability. One justification for strict length limits is that it is supposed to benefit readers if the authors write succinctly, without rambling on for pages and pages.  And we know that the longer the paper, the fewer people will even begin to read it, let alone get to the end. So, in principle, length limits should help. But in practice they often achieve the opposite effect, especially if we have papers reporting several experiments and using complex methods. For instance, I recently read a paper that reported, all within the space of a single Results section about 2000 words long, (a) a genetic association analysis; (b) replications of the association analysis on five independent samples (c) a study of methylation patterns; (d) a gene expression study in mice; and (e) a gene expression study in human brains. The authors had done their best to squeeze in all essential detail, though some was relegated to supplemental material, but the net result was that I came away feeling as if I had been hit around the head by a baseball bat. My sense was that the appropriate format for reporting such a study would have been a monograph, where each component of the study could be given a chapter, but of course, that would not have the kudos of a publication in a high impact journal, and arguably fewer people would read it.
Now that journals are becoming online-only, a major reason for imposing length limits – cost of physical production and distribution of a paper journal – is far less relevant. Yes, we should encourage authors to be succinct, but not so succinct that scientific communication is compromised.