Wednesday, 13 May 2020

Manipulated images: hiding in plain sight?


Many years ago, I took a taxi from Manchester Airport to my home in Didsbury. It’s a 10-minute drive, but the taxi driver took me on a roundabout route that was twice as long. I started to query this as we veered off course, and was given a rambling story about road closures. I paid the fare but took a note of his details. The next day, having confirmed that there were no road closures, I wrote to complain to Manchester City Council. I was phoned by a man from the council who cheerfully told me that this driver had a record of this kind of thing, but not to worry: he’d be made to refund me by sending a postal order for the difference between the correct fare and what I’d paid. He sounded quite triumphant about this because, as he explained, it would be tedious for the driver to have to go to a Post Office.

What on earth does this have to do with manipulated images? Well, it’s a parable for what happens when scientists are found to have published papers in which images with crucial data have been manipulated. It seems that typically, when this is discovered, the only consequence for the scientists is that they are required to put things right. So, just as with the taxi driver, there is no incentive for honesty. If you get caught out, you can just make excuses (oh, I got the photos mixed up), and your paper might have a little correction added. This has been documented over and over again by Elisabeth Bik: you can hear a compelling interview with her on the Everything Hertz podcast here.

There are two things about this that I just don’t get. First, why do people take the risk? I work with data in the form of numbers rather than images, so I wonder if I’m missing something. If someone makes up numbers, that can be really hard to detect (though there are some sleuthing methods available; one is sketched below). But if you publish a paper with manipulated images, the evidence of the fraud is right there for everyone to see. In practice, it was only when Bik appeared on the scene, with her amazing ability to spot manipulated images, that the scale of the problem became apparent (see note below). Nevertheless, I am baffled that scientists would leave such a trail of incriminating evidence in their publications, and not worry that at some future date they’d be found out.
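As an aside, here is a minimal sketch of one such sleuthing method for made-up numbers: the GRIM test of Brown and Heathers, which checks whether a reported mean is arithmetically possible given the sample size, when the underlying data are integers. The numbers in the example are hypothetical, chosen only to illustrate the idea.

    # A minimal sketch of the GRIM test (Brown & Heathers): for integer-valued
    # data such as Likert scores, a reported mean must equal some integer sum
    # divided by n. The figures below are hypothetical, not from any real paper.

    def grim_consistent(reported_mean: float, n: int, decimals: int = 2) -> bool:
        """Return True if a mean reported to `decimals` places could have
        arisen from n integer-valued observations."""
        nearest_sum = round(reported_mean * n)
        # Check the nearest integer sums; +/-1 absorbs rounding slop.
        for total in (nearest_sum - 1, nearest_sum, nearest_sum + 1):
            if round(total / n, decimals) == round(reported_mean, decimals):
                return True
        return False

    # A mean of 5.19 from n = 28 integer scores is impossible: the candidate
    # sums give 144/28 = 5.14, 145/28 = 5.18 and 146/28 = 5.21, never 5.19.
    print(grim_consistent(5.19, 28))  # False -> flags an inconsistency
    print(grim_consistent(5.18, 28))  # True  (145/28 rounds to 5.18)

A check like this cannot prove fraud, of course; it can only flag reported statistics that could not have come from the stated sample.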

But I guess the answer to this first question is contained within the second: why isn’t image manipulation taken more seriously? It’s depressing to read how time after time, Bik has contacted journals to point out irregularities in published images only to be ignored. The minority of editors who do decide to act behave like Manchester City Council: the authors have to put the error right, but it seems there are no serious consequences. And meanwhile, like many whistleblowers, far from being thanked for cleaning up science, Elisabeth has suffered repeated assaults on her credibility and integrity from those she has offended.

This week I saw the latest tale in this saga: Bik tweeted about a paper published in Nature that was being taken seriously in relation to treatment for coronavirus. Something in me snapped and I felt it was time to speak out. Image manipulation is fraud. If authors are found to have done it, the paper should be retracted and they should be banned from publishing in that journal in future. I call on the ‘high impact’ journals such as Nature to lead the way in implementing such a policy. I’d like to see some sanctions from institutions and funders as well, but I’ve learned that issues like this need a prolonged campaign to achieve small goals.

I’d be the first to argue that scientists should not be punished for honest errors (see this paper, or the free preprint version). It's important to recognise that we are all fallible and prone to mistakes. I can see how someone might mix up two images, for instance. But in many of the cases detected by Elisabeth, part of one image is photoshopped into another, and then resized or rotated. I can’t see how that can be blamed on honest error. The only defence left for the PI seems to be to blame a single rogue member of the lab. If someone they trust is cooking the data, an innocent PI could be unwittingly implicated in fraud. But the best way to avoid that is to have a lab culture in which honesty and integrity are valued above Nature papers. And we’ll only see such a culture become widespread if malpractice has consequences.

‘Hiding in Plain Sight’ is a book by Sarah Kendzior about overt criminality in the US political scene, which the author describes as ‘a transnational crime syndicate masquerading as a government’. The culture she describes can be likened to that seen in some areas of high-stakes science: the people who manipulate figures don’t worry about getting found out, because they achieve fame and grants with no apparent consequences, even when the fraud is detected.

Notes (14th May 2020)
1. Coincidentally, a profile of Elisabeth Bik appeared in Nature the same day as this blogpost: https://www.nature.com/articles/d41586-020-01363-z
2. Correction: Both Elisabeth Bik and Boris Barbour (comment below) pointed out that she was not the first to investigate image manipulation.

5 comments:

  1. Although Elisabeth has done Herculean work and has brought the issue to the attention of many, it's not really fair to say she was the first to highlight the issue, unless she is the person behind the pseudonym "Clare Francis", familiar to many editors and research integrity officers around the world. A huge number of cases of manifest figure manipulation can now be found in the PubPeer database (started in 2012). Although Elisabeth is a very significant contributor, she is by no means the only one. Retraction Watch and Paul Brookes are other "pioneers", predating PubPeer, who refused to accept this situation.

    Your anger at the impunity of these frauds is justified. Still, only a tiny fraction of the obvious problems in the PubPeer database lead to any visible action, be it from authors, journals or institutions. As you point out, journals accept deeply unconvincing corrections while studiously avoiding asking hard questions about just how an image could have become so manipulated. Journals are, of course, subject to acute conflicts of interest. They are supposed to follow the COPE guidelines, but those guidelines are a mess. The main retraction guideline requires "clear evidence that the findings are unreliable", which the journals seem to interpret as "beyond reasonable doubt":

    https://publicationethics.org/retraction-guidelines

    However, a more specific guideline - co-written by Springer Nature - essentially requires that any image manipulation lead to an automatic retraction:

    https://publicationethics.org/resources/flowcharts/what-do-if-you-suspect-image-manipulation-published-article

    There is a weaselly bit about allowing a correction if the "manipulation is very minor", but nobody ever dares explain why a given manipulation qualifies as "very minor". Needless to say, the journals (including Nature) largely ignore this more explicit guideline.

    I recognised your title. Research feels relatively insignificant in the political tides sweeping us away at the moment, but we scientists can still ensure that we plough our own furrow straight. Unlike many activities, research is quite democratic, and a small amount of collective action can be surprisingly effective. It is easy to pick off one person complaining by characterising them as a crank, a failed scientist, etc. But if five or ten people express the same opinion publicly, that becomes very difficult to ignore. I've long argued that evaluations should take a severe view of low-quality work, and of course of fraud. Until it becomes a career negative for people to publish crap, they will continue to do so.

  2. Re the PI blaming a rogue member of the lab (often a grad student, probably long-departed, perhaps without leaving a forwarding address, and we're not quite sure how they spelt their last name): we need a culture like that of American corporate law, which has the notion that if a company commits fraud or some other crime, the CEO can be held responsible on the basis that they either "knew or should have known" what was going on. PIs should be expected to satisfy themselves that there is no malfeasance going on in their lab, and they should be held responsible if there is. Let's face it, in most cases they are indeed responsible for setting the tone of the research culture, either overtly ("I want results, dammit, and your work visa is up for renewal next month") or implicitly ("I'm far too busy jetting round the world promoting my new popular book to spend time checking that the $4 million of taxpayers' money that my lab gets every year is spent properly").

  3. I agree with you. But there are bigger problems with this paper: problems of a technical nature that render it truly objectionable and scientifically without merit. Nothing can be drawn from it. I put a detailed critique into the comments section today; first it was flagged (unbeknownst to me) as spam, and six hours later it is still 'pending'. Does somebody have something to hide here? I don't want to litter your blog with it, but given that in my critique I have criticized editors for not listening to referees, it is astonishing to me what kind of 'filters' there are or might be. Cheers, Georgy Koentges, Prof of Biomedicine and Evolution, University of Warwick.

  4. Hello, I agree with you, but there are much bigger problems with this paper than the faked images (which are of low value anyway). I have left a critique in the comments section, still not published more than six hours later; who has something to hide here? Best wishes, Georgy Koentges, Prof of Biomedicine and Evolution, University of Warwick

  5. In the early 80s, my parents and their friends were using optical recording of voltage-sensitive dyes, which is another image-as-data kind of method. My mom talks about regular audits of all students’ work as part of their normal course of business. Even as independent scientists, they kept carbon-paper journals. When I completed my PhD in SLP, essentially no similar oversight existed, which totally shocked them. I still have essentially 100% of my six years of data in my house and have never been asked to produce it for anyone except in manuscript form. That means my HIPAA compliance (I’m in the US) and my integrity are totally “don’t ask, don’t tell”. I don’t know where the actual oversight went. It worries me, because they did catch students by looking back then, but somewhere along the way it seems someone decided it wasn’t worth the effort anymore. Now the field will face the consequences, while the authors don’t. Behavioral expectations and oversight need to be a real part of training again.
