Tuesday, 23 November 2021

Linking responsibility for climate refugees to emissions

 


I followed the COP summit in Glasgow with interest, particularly via Channel 4 News, which is good at reporting minority voices. There were incredibly powerful speeches by people whose homes are going to be uninhabitable if we exceed 1.5 degrees of warming - and indeed by some for whom extreme weather has already caused famine. For examples, see the speeches by Elizabeth Wathuti, youth climate activist from Kenya, and Mia Mottley, Prime Minister of Barbados, and the interview with Shauna Aminath, Minister of Environment for the Maldives, and Seve Paeniu, Tuvalu's Minister for Finance and Climate Change.

The plan at COP seemed to focus on decelerating warming in order to, as far as possible, prevent this happening. But it's pretty clear that, even in the unlikely event that we hold warming to 1.5 degrees, there will be enormous numbers of people who become climate refugees. People don't want to be forced out of their homes. It makes far more sense to adopt strategies that will avoid that than to plan for mass migrations. But sadly, human beings have a habit of not believing the worst until it is upon them. So I think we're going to have mass migrations, and I'm worried that nobody seems to be planning for them.

The reaction of many wealthy countries to refugees has been to keep them out at all costs and hope they go away. People fleeing their homelands are treated like toxic waste: a problem to be exported. That pretty much sums up the attitude of the current UK government, who have at various times suggested that asylum seekers be sent to Ascension Island, confined on ship hulks and, most recently - to the surprise of the Albanians - sent to Albania. But this problem is not going to go away. We will see increasing numbers of people roaming around desperately trying to find somewhere to live. Quite apart from the humanitarian disaster for the refugees themselves, this will create social unrest and conflict in wealthy countries unless it is planned for. Germany has shown that it is possible to integrate large numbers of refugees, and that they can contribute positively to a society if given support.

So, as well as anticipating the impacts of global heating on the physical environment, we need to be planning for mass migrations of people who have lost their homeland. We will need a global strategy, where everyone plays their part. And it should be those countries who are doing least to curb global emissions that should step up and take in refugees.

So I suggest that the next COP summit should aim for a more direct linkage between emissions generated and responsibility for refugees: the more you emit, the more you have to commit to giving citizenship to those who have been rendered homeless by climate change. Note I am saying citizenship, not temporary accommodation in a tent in the desert (or a ship hulk for that matter).

Heads of state who genuinely don't believe that things are that bad, and who think it's okay to continue to generate emissions, will find this an easy commitment to make, as it would only kick in once a homeland has become uninhabitable. It should be possible to devise a formal criterion for this: e.g. the place is under water, or experiencing temperatures of 40 degrees or more over a sustained period.

Take, for example, Australia, whose PM, Scott Morrison, tries to blag his way out of a crisis and thinks it's fine to continue mining and exporting fossil fuels. It's ironic that Australia's solution to refugees has previously been to send them off to be imprisoned on a Pacific Island. That sort of solution may no longer be available in future, as Nauru succumbs to rising sea levels.

Given the major role that Australia has played in causing global warming, wouldn't it make sense if at the next COP it was required to step up to its responsibilities and undertake to award citizenship to Pacific Islanders whose homes will be submerged in a few decades? The greater the emissions, the higher the quota of people. And who knows, if it were put to them like that, they might even start taking the phasing out of fossil fuels more seriously?
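To make the arithmetic of such a scheme concrete, here is a minimal sketch of how a citizenship quota proportional to emissions might be calculated. The country names and emissions figures are placeholders, not real data, and the choice of emissions measure (current versus cumulative) would of course be a matter for negotiation.

```python
# Minimal sketch: allocate citizenship quotas in proportion to each
# country's share of emissions. The figures below are placeholders,
# not real emissions data.

def allocate_quotas(emissions_by_country, displaced_people):
    """Split a total number of displaced people across countries
    in proportion to their share of total emissions."""
    total_emissions = sum(emissions_by_country.values())
    return {
        country: round(displaced_people * emissions / total_emissions)
        for country, emissions in emissions_by_country.items()
    }

# Hypothetical example: three emitters, 100,000 people displaced.
emissions = {"Country A": 500, "Country B": 300, "Country C": 200}  # e.g. Mt CO2
print(allocate_quotas(emissions, displaced_people=100_000))
# {'Country A': 50000, 'Country B': 30000, 'Country C': 20000}
```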

Sunday, 14 November 2021

Universities vs Elsevier: who has the upper hand?

 

The academic publisher Elsevier is currently negotiating a deal with UK universities. In Oxford, as in other universities, there have been extensive discussions about the proposed deal; the goals are to reduce costs to sustainable levels and to provide full and immediate open access to UK research. I have a nasty feeling that we could end up in the same situation as those at the COP summit: with a deal where the publishers feel they are giving away a huge amount, while the consumers are still unsatisfied.

In the print era, publishers already had a large profit margin.  I was a journal editor when electronic publishing first came in, and I remember discussions with the publisher - who was clearly very nervous about how this might damage their bottom line. With some clever business practices, such as bundling, they managed to keep going, making more rather than less money.

Other new developments - the advance of metrics and requirements for open access publishing - also pose challenges to them, but their response, like any savvy business, is to take control of these new developments and find a way to profit from them as well.

So should we go along with this? The discussions have largely centred around money, and this is a real concern. Publishers charge massively inflated subscription charges, taking as profit money that our libraries could otherwise put to good use. But the problems go way beyond that, including blocking open access to authors' own work, and giving poor quality and even fraudulent material a veneer of respectability. The publisher currently has considerable power over what gets published but takes little responsibility when things go wrong.

But we also have power, and I think it's time we started to wield it. I have two proposals: one radical and the other less so, both of which go beyond what is currently being considered in the JISC discussions.

The radical option: move publishing in-house  

In an ideal world, we would not have any deals with for-profit publishers. Universities would take control of academic publishing. A model which I have found works well is the Wellcome Trust journal Wellcome Open Research (WOR). I served on its Editorial Advisory Board in the first years of its operation and I have published eight papers in WOR. 

Quality control is maintained by requiring that work published in WOR has been Wellcome-Trust funded. The platform is paid for by the Wellcome Trust, and all material is Open Access, with no charge to authors.

Universities could use this as a model for developing their own platforms for publishing the work of their researchers. As with the Wellcome Trust, researchers would be encouraged rather than required to publish there. Setting up and maintaining the platform would cost money, but this might be covered by savings from stepping back from deals with big publishers. This is not a perfect solution, of course; we would still want access to past material published by traditional publishers. But I feel this option should be given serious consideration.

The less radical option: an agreement with Elsevier that reflects what we want and need

If we are to retain a working relationship with Elsevier, then we need to ensure that we are getting value for money, and that they are delivering what we want.

One way in which they fail to do this is through the copyright agreements that they require authors to sign. I agree with Sally Rumsey that any publisher that does not allow an author to retain rights to their own work is problematic. Furthermore, there is concern that even if Elsevier were to say they agreed to author rights retention, they would continue to adopt practices that would mislead authors into signing away their rights.

Another issue is how publishers deal with matters related to misbehaviour by editors and authors. We have a number of serious problems with the academic publishing system that have been developing for years, and have been thrown into stark relief by the pandemic. Publications in journals are a kind of academic currency, with potentially huge rewards in terms of promotion, pay and tenure of academic staff. Unfortunately, this means that some people will go to great lengths to procure publications, and this can include fraudulent means. A reputable publisher should recognise and indeed anticipate such problems and take steps to deal with them. Elsevier has been notably poor in doing so. I will give a handful of examples.

Clearly erroneous/fraudulent work remains in the literature. In recent years, a massive problem has been discovered with so-called paper mills - a form of industrialised cheating that involves the generation of literally thousands of fake papers. This phenomenon was discussed at a meeting this summer, the Computational Research Integrity Conference (CRI-Conf), which brought together some of the scientists who uncovered paper mills, representatives of publishers (including Elsevier), and university integrity officers. A repeated complaint of the 'data sleuths' (the scientists who uncovered the paper mills) was that when the problems were drawn to the attention of the editors or publishers, typically nothing was done. Some of these papers are highly cited and have medical implications - see, for instance, work by Jennifer Byrne and Cyril Labbé on cancer biology, or by Elisabeth Bik on image manipulation.

To its credit, Elsevier was recently cited as acting promptly when a cache of nonsense papers in special issues was uncovered, but one still has to ask how these papers, some of which were complete gibberish, originally got through the supposedly rigorous peer-review process. For another recent example, see here.

Part of the problem seems to be that there is very little oversight of editors.  I first became aware of Elsevier's lax attitude to research integrity when I discovered a ring of editors. There was overwhelming evidence that a set of editors were publishing one another's papers in their journals, with publication lags so brief as to indicate there had not been peer review. Emails to Elsevier received no reply, and so I started reporting the events on my blog. This created such publicity that Elsevier then posted a reply, but initially they denied that any papers had been published without peer review. Eventually, as the evidence mounted up, they were forced to acknowledge the problem, and claimed to be investigating it, but at no point did they communicate with me, and their investigation went on for years and has never led to any consequences. One of the offending editors, Johnny Matson, quietly disappeared without comment and is now editing another journal for another publisher. The unreviewed papers, many of which are concerned with intervention for children with developmental disorders, remain in the literature. 

More recently, I did a bibliometric analysis of editors of psychology journals who publish prolifically in their own journals, and broke the results down by publisher. Elsevier had the highest number of editors-in-chief in this category.
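For readers curious about what this kind of screening involves, here is a minimal sketch of the general idea - counting what fraction of an editor's papers appear in the journal they edit. The data, field names and flagging threshold are illustrative assumptions, not those of my actual analysis.

```python
# Minimal sketch of screening editors-in-chief for prolific publication in
# their own journals. The records, field names and the 20% threshold are
# illustrative assumptions, not those of the actual analysis.

def self_publication_rate(editor, own_journal, papers):
    """Fraction of an editor's papers that appeared in the journal they edit."""
    authored = [p for p in papers if editor in p["authors"]]
    if not authored:
        return 0.0
    in_own = [p for p in authored if p["journal"] == own_journal]
    return len(in_own) / len(authored)

# Hypothetical records
papers = [
    {"authors": ["Editor X", "Coauthor"], "journal": "Journal A"},
    {"authors": ["Editor X"], "journal": "Journal A"},
    {"authors": ["Editor X"], "journal": "Journal B"},
]

rate = self_publication_rate("Editor X", "Journal A", papers)
if rate > 0.20:  # flag for closer inspection
    print(f"Editor X: {rate:.0%} of papers in own journal - worth a closer look")
```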

A related case in another field is that of Didier Raoult, who was senior author on a paper advocating hydroxychloroquine as a treatment for Covid-19 that appeared in 2020 in an Elsevier journal for which he was Editor-in-Chief. It has subsequently been shown to be riddled with methodological errors, but, although Elsevier claimed to be investigating it in April 2020, it has not been retracted.

At the CRI-Conf publishers said that they were doing their best, but the problem of fake and fraudulent practices was so huge they could not be expected to tackle all of it. Yet we know publishers make enormous profits, so why aren't they putting more effort into checking the integrity of the work they publish - and dealing promptly with the problems that are drawn to their attention? Elsevier is not the only large publisher to be affected, but if it wants to be seen as a leader in the field, it should recognise that integrity of published material is of supreme importance - especially in clinical fields where people's health and wellbeing can be affected (as in the cases of Raoult and Matson noted above).

If UK universities do strike a deal with Elsevier, then they should take the opportunity to include in any future contract the requirements that:

(a) Elsevier allows authors the retention of the rights to their publications and  

(b) it provides annually an open report on the number of integrity issues that have been raised about its publications and how these have been dealt with. 

This would go some way toward ensuring that we get value for money, and are not simply adding to the profits of a company whose underlying ethos is not aligned with promoting scientific quality.

Universities have historically been extremely meek in accepting the terms and conditions imposed by big publishers, who have in turn treated academics as a cash cow: we produce for free the product that they sell, and then pay to publish and read it. They cannot survive without the research created in universities; we should start recognising that we need not bow down to their demands.

DISCLAIMER: I am writing in a personal capacity and not as a representative of the University of Oxford.


14/11/21

P.S. Some further examples of integrity problems with Elsevier have been pointed out to me on Twitter:

Geoff Frampton (@Geoff_Frampton) noted: 'Elsevier had the largest market share among the publishers of retracted Covid-19 articles in our study.'

Patricia Murray (@PMurray_65) tweeted: Elsevier have failed to retract this 2008 Lancet paper that is known to be fraudulent and poses a risk to patients: thelancet.com/journals/lance The paper stated the so-called 'bioengineered' airway had "a normal appearance and mechanical properties at 4 months." 

It is particularly shocking that the publisher is still making this article available (at a cost of $31) given that the author had a prison sentence for faked research, and conducted unethical experiments on seriously ill patients: see this report. 

P.P.S. It gets worse. There have been strenuous attempts to get the journal to retract the paper, to no avail. This has made the national news in the UK.

 



Saturday, 27 March 2021

Open data: We know what's needed - now let's make it happen

©Cartoonstock.com

 

The week after the end of University term and before the Easter break is chock-a-block with conferences and meetings. A great thing about them all being virtual is that you can go to far more. My brain is zinging (and my to-do list sadly neglected) after 3 days of the Computational Research Integrity Conference (CRI-Conf) hosted at Syracuse University, and then a 1.5-hour webinar organised by the Center for Biomedical Research Transparency (CBMRT) on Accelerating Science in the Age of COVID-19. Synthesising all the discussion in my head gives one clear message: we need to get a grip on the failure of authors to make their data and analysis scripts open.

For a great overview of CRI-Conf, see this blog by one of the presenters, Debora Weber-Wulff. The meeting focused mainly on fraud, and the development of computational methods to detect it, with most emphasis on doctored images. There were presentations by those who detect manipulated data (notably Elisabeth Bik and Jennifer Byrne), by technology experts developing means of automating image analysis, by publishers and research integrity officers who attempt to deal with the problem, and by those who have created less conventional means to counteract the tide of dodgy work (Boris Barbour from PubPeer, and Ivan Oransky from Retraction Watch). The shocking thing was the discovery that fabricated data in the literature is not down to a few bad apples: there are "paper mills" that generate papers for sale, which are readily snapped up by those who need them for professional advancement.

CRI-Conf brought together people who viewed the problem from very different perspectives, with a definite tension between those representing "the system" - publishers, institutions and funders - and those on the outside - the data sleuths, PubPeer, Retraction Watch. The latter are impatient at the failure of the system to act promptly to remove fraudulent papers from the literature; the former respond that they are doing a lot already, but the problem is immense and due process must be followed. There was, however, one point of agreement. Life would be easier for all of us if data were routinely published with papers. Research integrity officers in particular noted that a great deal of time in investigations is spent tracking down data.

The CBMRT webinar yesterday was particularly focused on the immense amount of research that has been generated by the COVID-19 pandemic. Ida Sim from Vivli noted that only 70 of 924 authors of COVID-19 clinical trials agreed to share their data within 6 months. John Inglis, co-founder of bioRxiv and medRxiv, cited Marty Makary's summary of preprints: "a great disruptor of a clunky, slow system never designed for a pandemic". Deborah Dixon from Oxford University Press noted how open science assumed particular importance in the pandemic: open data not only make it possible to check a study's findings, but can also be used fruitfully as secondary data for new analyses. Finally, a clinician's perspective was provided by Sandra Petty. Her view is that there are too many small underpowered studies: we need large collaborative trials. My favourite quote: "Noise from smaller studies rapidly released to the public domain can create a public health issue in itself".

Everyone agreed that open data could be a game-changer, but clearly it was still the exception rather than the rule. I asked whether it should be made mandatory, not just for journal articles but also for preprints. The replies were not encouraging. Ida Sim, who understands the issues around data-sharing better than most, noted that there were numerous barriers - there may be legal hurdles to overcome, and those doing trials may not have the expertise, let alone the time, to get their data into an appropriate format. John Inglis noted it would be difficult for moderators of preprint servers to implement a data-sharing requirement for preprints, and that many authors would find it challenging.

I am not, however, convinced. It's clear that there is a tsunami of research on COVID-19, much of it of very poor quality. This is creating problems for journals, reviewers, and for readers, who have to sift through a mountain of papers to try and extract a signal from the noise. Setting the bar for publication - or indeed pre-prints - higher, so that the literature only contains papers that can be (a) checked and (b) used for meta-analysis and secondary studies, would reduce the mountain to a hillock and allow us to focus on the serious stuff. 

The pandemic has indeed accelerated the pace of research, but things done in a hurry are more likely to contain errors, so it is more important than ever to be able to check findings, rather than just trusting authors to get it right. I'm thinking of the recent example where an apparent excess of COVID-19 in toddlers was found to be due to restricting age to a 2-digit number, so that someone aged 102 would be listed as a 2-year-old. We may be aghast, but I feel "there but for the grace of God go I". Unintentional errors are everywhere, and when the stakes are as high as they are now, we need to be able to check and double-check the findings of studies that are going to be translated into clinical practice. That means sharing analysis code as well as data. As Philip Stark memorably said, "Science should be 'show me', not 'trust me'".
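To show how easily such a slip can happen, here is a minimal reconstruction of that kind of error - not the actual code involved, just an illustration of how truncating an age field to two characters turns 102 into 2.

```python
# Reconstruction of the kind of error described above, not the actual code:
# if age is read as a 2-character field, a 102-year-old becomes a 2-year-old.

records = ["age:9", "age:42", "age:102"]

def parse_age_buggy(record):
    # Keeps only the last two characters of the age field - silently
    # mangles three-digit ages.
    return int(record.split(":")[1][-2:])

def parse_age_correct(record):
    return int(record.split(":")[1])

for r in records:
    print(r, "buggy:", parse_age_buggy(r), "correct:", parse_age_correct(r))
# age:102 -> buggy: 2, correct: 102
```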

All in all, my sense is that we still have a long way to go before we shift the balance in publishing from a focus on the needs of authors (to get papers out rapidly) to an emphasis on users of research, for whom the integrity of the science is paramount. As Besançon et al put it: Open Science saves lives.

Saturday, 13 March 2021

Time for publishers to consider the rights of readers as well as authors

 

© cartoonstock.com
I've just been reading this piece, entitled "Publication ethics: Barriers in resolving figure corrections", by Lataisia Jones, on the website of the American Society for Microbiology, which publishes several journals. Microbiology is a long way from my expertise and interests, but I have been following the work of Elisabeth Bik, data sleuth extraordinaire, for some time - see here. As Bik points out, the responses (or more often lack of response) she gets when she raises concerns about papers are similar to those seen in other fields where whistleblowers try to flag up errors (e.g. this Australian example).

It's clear that there are barriers to correcting the scientific record when errors are identified, and so I was pleased to see a piece tackling this head-on, which attempts to explain why responses by journals and publishers often appear to be so slow and unsatisfactory. However, I felt the post missed some key points that need to be taken seriously by publishers and editors.

The post starts by saying that: "Most figure concerns are created out of error and may present themselves in the form of image duplication, splicing and various figure enhancements." I think we need to have that "most" clarified in the form of a percentage. Yes, of course, we all make mistakes, but many of the issues flagged up by Bik are not the kinds of error made by someone going "oops" as they prepare their figures. It is crucial to be aware that many papers are flawed because they contain honest errors, but that fact should not lead us to conclude that most cases of problematic images are of this kind - at least, not until there is hard evidence on that point.

The post goes on to document the stages that are gone through when an error has been flagged up, noting in particular these guidelines produced by the Committee on Publication Ethics (COPE). First, the author is contacted. "Since human error is a common reason behind published figure concerns, ASM remains mindful and vigilant while investigating to prevent unnecessarily tarnishing a researcher's reputation. Oftentimes, the concern does not proceed past the authors, who tend to be extremely responsive." So here again, Jones emphasises human error as a "common reason" for mistakes in figures, and also describes authors as "extremely responsive". And here again, I suggest some statistics on both points would be of considerable interest.

Jones explains that this preliminary step may take a long time when several years have elapsed between publication and the flagging up of concerns. The authors may be hard to contact, and the data may no longer be available. Assuming the authors give a satisfactory response, what happens next depends on whether the error can be corrected without changing the basic results or conclusions. If so, then a Correction is published. Interestingly, Jones says nothing about what happens if an honest error does change the basic results or conclusions. I think many readers would agree that in that case there should be a retraction, but I sense a reluctance to accept that, perhaps because Jones appears to identify retraction with malpractice. 

She describes the procedure followed by ASM if the authors do not have a satisfactory response: the problem is passed on to the authors' institution for investigation. As Jones points out, this can be an extended process, as it may require identification of old data, and turn into an inquiry into possible malpractice. Such enquiries often move slowly because the committee members responsible for this work are doing their investigations on top of their regular job. And, as Jones notes: "Additionally, multiple figure concerns and multiple papers take longer to address and recovering the original data files could take months alone." So, the institution feeds back its conclusions (typically after months or possibly years), which may return us to the point where it is decided a Correction is appropriate. But, "If the figure concerns are determined to have been made intentionally or through knowingly manipulating the data, the funding agencies are notified." And yet another investigation starts up, adding a few more months or years to the process. 

So my reading of this is that if the decision to make a Correction is not reached, the publisher and journal at this point hand all responsibility over to other agencies - the institution and the funders. The post by Jones at no point mentions the conditions that need to be met for the paper to actually be retracted (in which case it remains in the public domain but with a retraction notice) or withdrawn (in which case it is removed). Indeed, the word 'retract' does not appear at all in her piece. 

What else is missing from all of this? Any sense of responsibility to other researchers and the general public. A peer-reviewed published article is widely regarded as a credible piece of work. It may be built on by other researchers, who assume they can trust the findings. Its results may be used to inform treatment of patients or, in other fields, public policy. Leaving an erroneous piece of work in a peer-reviewed journal without any indication that concerns have been raised is rather like leaving a plate of cookies out for public consumption, when you know they may be contaminated. 

Ethical judgements by publishers need to consider their readers, as well as their authors. I would suggest they should give particularly high priority to published articles where concerns have not been adequately addressed by authors, and which also have been cited by others. The more citations, the greater the urgency to act, as citations spawn citations, with the work achieving canonical status in some cases. In addition, if there are multiple papers by the same author with concerns, surely this should be treated as a smoking gun, rather than an excuse for why it takes years to act.
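For illustration, here is a minimal sketch of the kind of triage rule I have in mind, using made-up data: cases with unresolved concerns are dealt with first, ordered by citation count.

```python
# Minimal sketch of a triage rule for flagged papers: deal first with papers
# that have unresolved concerns and many citations. The records are made up
# for illustration only.

flagged_papers = [
    {"id": "paper-1", "citations": 250, "author_response_adequate": False},
    {"id": "paper-2", "citations": 3,   "author_response_adequate": False},
    {"id": "paper-3", "citations": 90,  "author_response_adequate": True},
]

def triage(papers):
    """Order unresolved cases by citation count, most-cited first."""
    unresolved = [p for p in papers if not p["author_response_adequate"]]
    return sorted(unresolved, key=lambda p: p["citations"], reverse=True)

for p in triage(flagged_papers):
    print(p["id"], p["citations"])
# paper-1 250
# paper-2 3
```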

It should not be necessary to wait until institutions and funders have completed investigations into possible malpractice. Malpractice is actually a separate issue here: the key point for readers of the journal is whether the published record is accurate. If it is inaccurate - either due to honest error or malpractice - the work should be retracted, and there is plenty of precedent for retraction notices to specify the reason for retraction. This also applies to the situation where there is a realistic concern about the work (such as manipulated figures or internally inconsistent data) and the author cannot produce the raw data that would allow for the error to be identified and corrected. In short, it should be up to the author to ensure that the work is transparent and reproducible. Retaining erroneous work in a journal is not a neutral act. It pollutes the scientific literature and ignores the rights of readers not to be misled or misinformed.

Wednesday, 3 March 2021

University staff cuts under the cover of a pandemic: the cases of Liverpool and Leicester

I had a depressing sense of déjà vu last week on learning that two UK universities, the University of Liverpool and the University of Leicester, had plans for mass staff redundancies, affecting many academic psychologists among others. I blogged about a similar situation affecting King's College London 7 years ago.

I initially wondered whether these new actions were related to the adverse effect of the pandemic on university finances, but it's clear that both institutions have been trying to bring in staff cuts for some years. The pandemic seems not so much the reason for the redundancies as a smokescreen behind which university administration can smuggle in unpopular measures.  

Nothing, of course, is for ever, and universities have to change with the times. But in both these cases, the way in which cuts are being made seems both heartless and stupid, and has understandably attracted widespread condemnation (see public letters below). There are differences in the criteria used to select people for redundancy at the two institutions, but the consequences are similar.  

The University of Liverpool has used a metrics-based approach, singling out people from the Faculty of Health and Life Sciences who don't meet cutoffs in terms of citations and grant income. Elizabeth Gadd (@LizzieGadd) noted on Twitter that SciVal's Field-Weighted Citation Index, which was being used to evaluate staff, is unstable and unreliable with small sample sizes. Meanwhile, and apparently in a different universe, the University has recently advertised for a "Responsible Metrics Implementation Officer", funded by the Wellcome Trust, whose job is to "lead the implementation of a project to embed the principles of the Declaration on Research Assessment (DORA) in the university's practice". Perhaps their first job will be to fire the people who have badly damaged the University's reputation by their irresponsible use of metrics (see also this critique in Times Higher Education).
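Gadd's point about small samples is easy to illustrate: citation counts are highly skewed, so an average of paper-level citation ratios bounces around a great deal when it is based on only a handful of papers. The simulation below is my own toy illustration, not SciVal's actual FWCI calculation.

```python
# Toy simulation of why a mean citation ratio is unstable for small samples:
# citation counts are highly skewed, so averages over a handful of papers
# vary wildly from one researcher to the next, even when the underlying
# distribution is identical. Illustrative only - not SciVal's FWCI method.

import random
random.seed(1)

def fake_fwci(n_papers, field_mean=10):
    """Mean of paper-level (citations / field average) over n_papers papers,
    with citations drawn from a skewed (lognormal) distribution."""
    citations = [random.lognormvariate(1.5, 1.2) for _ in range(n_papers)]
    return sum(c / field_mean for c in citations) / n_papers

for n in (5, 200):
    scores = [fake_fwci(n) for _ in range(1000)]
    print(f"n={n}: simulated scores range from {min(scores):.2f} to {max(scores):.2f}")
# The n=5 scores span a far wider range than the n=200 scores.
```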

It is worth noting too that the advertisement proudly boasts that the University is a recipient of an Athena SWAN silver award, when among those targeted for redundancy are some who have worked tirelessly on Athena SWAN applications for institutes across the faculty where redundancies are planned. Assembling one of these applications is widely recognised as taking up as much time as the preparation of a major grant proposal. There won't be another Athena SWAN exercise for a few years and so it seems the institution is happy to dispose of the services of those who worked to achieve this accolade.  

The University of Leicester has adopted a different strategy, compiling a hit list that is based not on metrics, but on subject area. They are discussing proposals for redundancies* at five academic departments and three professional services units in order to "secure the future of the university". Staff who have worked at the university for decades have described the surreal experience of receiving notifications of the planned job losses alongside warm-hearted messages emphasising how much the university wants to support their mental health and wellbeing.

I lived through a purge some 20 years ago when I was based at the Medical Research Council's Applied Psychology Unit in Cambridge. The MRC used the interregnum between one director retiring and a new one being appointed to attempt to relocate staff who were deemed to be underperforming. As with all these exercises, management appeared to envisage an institution that would remain exactly as before, with a few people subtracted. But of course it doesn't work that way. Ours was a small and highly collegial research unit, and living through the process of having everyone undergo a high-stakes evaluation affected all of us. I can't remember how long the process went on for, but it felt like years. A colleague who, like me, was not under threat captured it well when he said that he had previously felt a warm glow when he received mail with the MRC letterhead. Now he felt a cold chill in anticipation of some fresh hell. I've had similar conversations at other universities with senior academics who can recall times when those managing the institution were regarded as benevolent, albeit in a (desirably) distant manner. Perhaps it helped that in those days vice chancellors often came from the ranks of the institution's senior academics, and saw their primary goal as ensuring a strong reputation for teaching and research.

As the sector has been increasingly squeezed over the years, this cosy scenario has been overturned, with a new cadre of managers appearing, with an eye on the bottom line. The focus has been on attracting student consumers with glitzy buildings and facilities, setting up overseas campuses to boost revenues, and recruiting research superstars who will embellish the REF portfolio. Such strategies seldom succeeded, with many universities left poorer by the same vice-chancellors who were appointed because of their apparent business acumen.  

There has been a big shift from the traditional meaning of "university" as a community of teachers and scholars. Academic staff are seen as dispensable, increasingly employed on short-term contracts. Whereas in the past there might have been a cadre of academics who felt loyal to their institution and pride in being part of a group dedicated to the furtherance of knowledge, we now have a precariat who live in fear of what might happen if targets are not met. And this is all happening at a time when funders are realising that effective research is done by teams of people, rather than lone geniuses (see e.g. this report). Such teams can take years to build, but can be destroyed overnight by those who measure academic worth by criteria such as grant income, or whether the Vice Chancellor understands the subject matter. I wonder too what plans there are for graduate students whose supervisors are unexpectedly dismissed - if the interdependence of the academic community is ignored, there will be impacts that go beyond those in the immediate firing line.

Those overseeing redundancies think they can cut out a pound of flesh from the university body, but unlike Shylock, who knew full well what he was doing, they seem to believe they can do so without weakening the whole institution. They will find to their cost that they are doing immense damage, not just to their reputation, but to the satisfaction and productivity of all who work and study there.  

Public letters of concern 

University of Leicester, Departments of Neuroscience, Psychology and Behaviour https://tinyurl.com/SaveNPB-UoL  

University of Liverpool, use of inappropriate metrics https://docs.google.com/document/d/1OJ28MT78MCMNkUtFXkLw3xROUxK7Mfr8yN8b-E2KDUg/edit#  

Equality and Diversity implications, University of Liverpool https://forms.gle/2AyJ2nHKiEchA3dP9

 

*correction made 5th March 2021 

Saturday, 23 January 2021

Time to ditch relative risk in media reports

The Winton Centre for Risk and Evidence Communication at the University of Cambridge has done some sterling work in developing guidelines for communicating risk to the general public. In a short video,  David Spiegelhalter explains how relative risk can be misleading when the baseline for a condition is not reported. For instance, he noted that many women stopped taking a contraceptive pill after hearing media reports that it was associated with a doubling in the rate of thrombo-embolism. In terms of absolute risk the increase sounds much less alarming, going from 1 in 7000 to 2 in 7000. 

One can understand how those who aren't scientifically trained can get this wrong. But we might hope that, in a pandemic, where public understanding of risk is so crucial, particular care would be taken to be realistic without being alarmist. It was, therefore, depressing to see a report on Channel 4 News last night where two scientists clearly explained the evidence on Covid variants in terms of absolute risk, impeccably reflecting the Winton Centre's advice, only to have the reporters translate the numbers into relative risk. I have transcribed the relevant sections:

0:44 Reporter: "The latest evidence from the government's advisers is that this new variant is more deadly. And this is what it means:"

Patrick Vallance: "If you took somebody in their sixties, a man in their sixties, the average risk is that for 1000 people who got infected, roughly ten would be expected to unfortunately die with the virus. With the new variant, with 1000 people infected, roughly 13 or 14 people might be expected to die." 

Reporter: "That’s a thirty to forty per cent increase in mortality." 

5:15 Reporter (Krishnan Guru-Murthy): "But this is a high percent of increase, isn't it. Thirty to forty percent increase of mortality, on a relatively small number." 

Neil Ferguson: "Yes. For a 60-year-old at the current time, there's about a one in a hundred risk of dying. So that means 10 in 1000 people who get the infection are likely to die, despite improvements in treatment. And this new variant might take that up to 13 or 14 in a 1000." 

The reporters are likely to stoke anxiety when they translate clear communication by the scientists into something that sounds a lot more scary. I hope this is not their intention: Channel 4 is one of the few news outlets that I regularly watch and in general I find it well-researched and informative. I would urge the reporters to watch the Winton Centre video, which in less than 5 minutes makes a clear, compelling case for dropping relative risk altogether in media reports. 
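For anyone who wants to check the arithmetic, here is a minimal sketch using the numbers quoted above - the contraceptive pill example and the new variant figures - showing how the same change looks in absolute and in relative terms.

```python
# Minimal sketch of the arithmetic: the same change in risk expressed in
# absolute and relative terms, using the figures quoted in this post.

def describe(name, baseline, new, denominator):
    absolute = new - baseline                        # extra cases per denominator
    relative = (new - baseline) / baseline * 100     # percentage increase
    print(f"{name}: {baseline} -> {new} per {denominator}"
          f" (absolute: +{absolute} per {denominator};"
          f" relative: +{relative:.0f}%)")

# Contraceptive pill example: 1 in 7000 -> 2 in 7000 ("doubling")
describe("Thromboembolism", 1, 2, 7000)

# New variant example: 10 in 1000 -> 13-14 in 1000 ("30-40% increase")
describe("Covid-19 mortality, man in his 60s", 10, 13, 1000)
describe("Covid-19 mortality, man in his 60s", 10, 14, 1000)
```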

 

This blogpost has been corrected to remove the name of Anja Popp as first reporter. She confirmed that she was not in this segment. My apologies.