Saturday, 27 March 2021

Open data: We know what's needed - now let's make it happen

©Cartoonstock.com

 

The week after the end of University term and before the Easter break is chock-a-block with conferences and meetings. A great thing about them all being virtual is that you can go to far more. My brain is zinging (and my to-do list sadly neglected) after 3 days of the Computational Research Integrity Conference (CRI-Conf) hosted at Syracuse University, and then a 1.5 hr webinar organised by the Center for Biomedical Research Transparency (CBMRT) on Accelerating Science in the Age of COVID-19. Synthesising all the discussion in my head gives one clear message: we need to get a grip on the failure of authors to make their data, and analysis scripts, open.

For a great overview of CRI-Conf, see this blog by one of the presenters, Debora Weber-Wulff. The meeting focused mainly on fraud, and the development of computational methods to detect it, with most emphasis on doctored images. There were presentations by those who detect manipulated data (notably Elisabeth Bik and Jennifer Byrne), by technology experts developing means of automating image analysis, by publishers and research integrity officers who attempt to deal with the problem, and by those who have created less conventional means to counteract the tide of dodgy work (Boris Barbour from PubPeer, and Ivan Oransky from Retraction Watch). The shocking thing was the discovery that fabricated data in the literature is not down to a few bad apples: there are "paper mills" that generate papers for sale, which are readily snapped up by those who need them for professional advancement.

CRI-Conf brought together people who viewed the problem from very different perspectives, with a definite tension between those representing "the system" - publishers, institutions and funders - and those on the outside - the data sleuths, PubPeer, Retraction Watch. The latter are impatient at the failure of the system to act promptly to remove fraudulent papers from the literature; the former respond that they are doing a lot already, but the problem is immense and due process must be followed. There was, however, one point of agreement. Life would be easier for all of us if data were routinely published with papers. Research integrity officers in particular noted that a great deal of time in investigations is spent tracking down data.

The CBMRT webinar yesterday was particularly focused on the immense amount of research that has been generated by the COVID-19 pandemic. Ida Sim from Vivli noted that only 70 of 924 authors of COVID-19 clinical trials agreed to share their data within 6 months. John Inglis, co-founder of bioRxiv and medRxiv, cited Marty Makary's summary of preprints: "a great disruptor of a clunky, slow system never designed for a pandemic". Deborah Dixon from Oxford University Press noted how open science assumed particular importance in the pandemic: open data not only make it possible to check a study's findings, but can also be used fruitfully as secondary data for new analyses. Finally, a clinician's perspective was provided by Sandra Petty. Her view is that there are too many small underpowered studies: we need large collaborative trials. My favourite quote: "Noise from smaller studies rapidly released to the public domain can create a public health issue in itself".

Everyone agreed that open data could be a game-changer, but clearly it was still the exception rather than the rule. I asked whether it should be made mandatory, not just for journal articles but also for preprints. The replies were not encouraging. Ida Sim, who understands the issues around data-sharing better than most, noted that there were numerous barriers - there may be legal hurdles to overcome, and those doing trials may not have the expertise, let alone the time, to get their data into an appropriate format. John Inglis noted it would be difficult for moderators of preprint servers to implement a data-sharing requirement for preprints, and that many authors would find it challenging.

I am not, however, convinced. It's clear that there is a tsunami of research on COVID-19, much of it of very poor quality. This is creating problems for journals, reviewers, and for readers, who have to sift through a mountain of papers to try and extract a signal from the noise. Setting the bar for publication - or indeed preprints - higher, so that the literature only contains papers that can be (a) checked and (b) used for meta-analysis and secondary studies, would reduce the mountain to a hillock and allow us to focus on the serious stuff.

The pandemic has indeed accelerated the pace of research, but things done in a hurry are more likely to contain errors, so it is more important than ever to be able to check findings, rather than just trusting authors to get it right. I'm thinking of the recent example where an apparent excess of COVID-19 in toddlers was found to be due to restricting age to a 2-digit number, so someone aged 102 would be listed as a 2-year-old. We may be aghast, but I feel "there but for the grace of God go I". Unintentional errors are everywhere, and when the stakes are as high as they are now, we need to be able to check and double-check the findings of studies that are going to be translated into clinical practice. That means sharing analysis code as well as data. As Philip Stark memorably said, "Science should be 'show me', not 'trust me'".
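No details of the offending pipeline were published, but it is easy to see how such a bug could arise. Here is a minimal sketch, assuming ages passed through a fixed-width two-character field (the function and data below are hypothetical, not the actual code behind the error):

```python
# Hypothetical sketch of a two-digit age field: the actual pipeline
# behind the reported toddler error is not public.

def parse_age_two_digit(raw: str) -> int:
    """Read age from a field truncated to its last two characters."""
    return int(raw[-2:])  # "102" -> "02" -> 2

ages = ["4", "37", "102"]
print([parse_age_two_digit(a) for a in ages])  # [4, 37, 2]: the 102-year-old becomes a toddler
```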

All in all, my sense is that we still have a long way to go before we shift the balance in publishing from a focus on the needs of authors (to get papers out rapidly) to an emphasis on users of research, for whom the integrity of the science is paramount. As Besançon et al put it: Open Science saves lives.

Saturday, 13 March 2021

Time for publishers to consider the rights of readers as well as authors

 

© cartoonstock.com
I've just been reading this piece entitled: "Publication ethics: Barriers in resolving figure corrections" by Lataisia Jones, on the website of the American Society for Microbiology, which publishes several journals. Microbiology is a long way from my expertise and interests, but I have been following the work of Elisabeth Bik, data sleuth extraordinaire, for some time - see here. As Bik points out, the responses (or more often lack of response) she gets when she raises concerns about papers are similar to those seen in other fields where whistleblowers try to flag up errors (e.g. this Australian example).

It's clear that there are barriers to correcting the scientific record when errors are identified, and so I was pleased to see a piece tackling this head-on, which attempts to explain why responses by journals and publishers often appear to be so slow and unsatisfactory. However, I felt the post missed some key points that need to be taken seriously by publishers and editors.

The post starts by saying that: "Most figure concerns are created out of error and may present themselves in the form of image duplication, splicing and various figure enhancements." I think we need to have that "most" clarified in the form of a percentage. Yes, of course, we all make mistakes, but many of the issues flagged up by Bik are not the kinds of error made by someone going "oops" as they prepare their figures. On the one hand, it is crucial to be aware that many papers are flawed because they contain honest errors; on the other, that fact should not lead us to conclude that most cases of problematic images are of this kind. At least, not until there is hard evidence on that point.

The post goes on to document the stages that are gone through when an error has been flagged up, noting in particular these guidelines produced by the Committee on Publication Ethics (COPE). First, the author is contacted. "Since human error is a common reason behind published figure concerns, ASM remains mindful and vigilant while investigating to prevent unnecessarily tarnishing a researcher’s reputation. Oftentimes, the concern does not proceed past the authors, who tend to be extremely responsive." So here again, Jones emphasises human error as a "common reason" for mistakes in figures, and also describes authors as "extremely responsive". And here again, I suggest some statistics on both points would be of considerable interest.

Jones explains that this preliminary step may take a long time when several years have elapsed between publication and the flagging up of concerns. The authors may be hard to contact, and the data may no longer be available. Assuming the authors give a satisfactory response, what happens next depends on whether the error can be corrected without changing the basic results or conclusions. If so, then a Correction is published. Interestingly, Jones says nothing about what happens if an honest error does change the basic results or conclusions. I think many readers would agree that in that case there should be a retraction, but I sense a reluctance to accept that, perhaps because Jones appears to identify retraction with malpractice. 

She describes the procedure followed by ASM if the authors do not have a satisfactory response: the problem is passed on to the authors' institution for investigation. As Jones points out, this can be an extended process, as it may require identification of old data, and turn into an inquiry into possible malpractice. Such enquiries often move slowly because the committee members responsible for this work are doing their investigations on top of their regular job. And, as Jones notes: "Additionally, multiple figure concerns and multiple papers take longer to address and recovering the original data files could take months alone." So, the institution feeds back its conclusions (typically after months or possibly years), which may return us to the point where it is decided a Correction is appropriate. But, "If the figure concerns are determined to have been made intentionally or through knowingly manipulating the data, the funding agencies are notified." And yet another investigation starts up, adding a few more months or years to the process. 

So my reading of this is that if the decision to make a Correction is not reached, the publisher and journal at this point hand all responsibility over to other agencies - the institution and the funders. The post by Jones at no point mentions the conditions that need to be met for the paper to actually be retracted (in which case it remains in the public domain but with a retraction notice) or withdrawn (in which case it is removed). Indeed, the word 'retract' does not appear at all in her piece. 

What else is missing from all of this? Any sense of responsibility to other researchers and the general public. A peer-reviewed published article is widely regarded as a credible piece of work. It may be built on by other researchers, who assume they can trust the findings. Its results may be used to inform treatment of patients or, in other fields, public policy. Leaving an erroneous piece of work in a peer-reviewed journal without any indication that concerns have been raised is rather like leaving a plate of cookies out for public consumption, when you know they may be contaminated. 

Ethical judgements by publishers need to consider their readers, as well as their authors. I would suggest they should give particularly high priority to published articles where concerns have not been adequately addressed by authors, and which also have been cited by others. The more citations, the greater the urgency to act, as citations spawn citations, with the work achieving canonical status in some cases. In addition, if there are multiple papers by the same author with concerns, surely this should be treated as a smoking gun, rather than an excuse for why it takes years to act.

It should not be necessary to wait until institutions and funders have completed investigations into possible malpractice. Malpractice is actually a separate issue here: the key point for readers of the journal is whether the published record is accurate. If it is inaccurate - either due to honest error or malpractice - the work should be retracted, and there is plenty of precedent for retraction notices to specify the reason for retraction. This also applies to the situation where there is a realistic concern about the work (such as manipulated figures or internally inconsistent data) and the author cannot produce the raw data that would allow for the error to be identified and corrected. In short, it should be up to the author to ensure that the work is transparent and reproducible. Retaining erroneous work in a journal is not a neutral act. It pollutes the scientific literature and ignores the rights of readers not to be misled or misinformed.

Wednesday, 3 March 2021

University staff cuts under the cover of a pandemic: the cases of Liverpool and Leicester

I had a depressing sense of déjà vu last week on learning that two UK Universities, University of Liverpool and University of Leicester, had plans for mass staff redundancies, affecting many academic psychologists among others. I blogged about a similar situation affecting King's College London 7 years ago.

I initially wondered whether these new actions were related to the adverse effect of the pandemic on university finances, but it's clear that both institutions have been trying to bring in staff cuts for some years. The pandemic seems not so much the reason for the redundancies as a smokescreen behind which university administration can smuggle in unpopular measures.  

Nothing, of course, is for ever, and universities have to change with the times. But in both these cases, the way in which cuts are being made seems both heartless and stupid, and has understandably attracted widespread condemnation (see public letters below). There are differences in the criteria used to select people for redundancy at the two institutions, but the consequences are similar.  

The University of Liverpool has used a metrics-based approach, singling out people from the Faculty of Health and Life Sciences who don't meet cutoffs in terms of citations and grant income. Elizabeth Gadd (@LizzieGadd) noted on Twitter that SciVal's Field-Weighted Citation Impact (FWCI), which was being used to evaluate staff, is unstable and unreliable with small sample sizes. Meanwhile, and apparently in a different universe, the University has recently advertised for a "Responsible Metrics Implementation Officer", funded by the Wellcome Trust, whose job is to "lead the implementation of a project to embed the principles of the Declaration on Research Assessment (DORA) in the university's practice". Perhaps their first job will be to fire the people who have badly damaged the University's reputation by their irresponsible use of metrics (see also this critique in Times Higher Education).
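Gadd's point about small samples is easy to illustrate with a toy simulation. This is not SciVal's actual computation: it assumes FWCI is the mean, over a researcher's papers, of actual citations divided by the field-expected count, and that citation counts are lognormally distributed (a common stylised assumption for their heavy skew). Every simulated researcher below has identical 'true' performance, yet small portfolios yield wildly divergent scores:

```python
# Toy simulation of a mean-of-ratios citation metric (FWCI-like);
# assumptions as described above - this is not SciVal's algorithm.
import numpy as np

rng = np.random.default_rng(42)
field_expected = np.exp(0.5)  # mean of lognormal(mu=0, sigma=1)

for n_papers in (5, 20, 100, 1000):
    # 10,000 simulated researchers with identical underlying performance
    cites = rng.lognormal(mean=0.0, sigma=1.0, size=(10_000, n_papers))
    scores = (cites / field_expected).mean(axis=1)
    lo, hi = np.percentile(scores, [5, 95])
    print(f"{n_papers:4d} papers: 90% of scores fall between {lo:.2f} and {hi:.2f}")
```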

It is worth noting too that the advertisement proudly boasts that the University is a recipient of an Athena SWAN silver award, when among those targeted for redundancy are some who have worked tirelessly on Athena SWAN applications for institutes across the faculty where redundancies are planned. Assembling one of these applications is widely recognised as taking up as much time as the preparation of a major grant proposal. There won't be another Athena SWAN exercise for a few years and so it seems the institution is happy to dispose of the services of those who worked to achieve this accolade.  

The University of Leicester has adopted a different strategy, compiling a hit list that is based not on metrics, but on subject area. They are discussing proposals for* redundancies at five academic departments and three professional services units in order to "secure the future of the university". Staff who have worked at the university for decades have described the surreal experience of receiving notifications of the planned job losses alongside warm-hearted messages emphasising how much the university wants to support their mental health and wellbeing.

I lived through a purge some 20 years ago when I was based at the Medical Research Council's Applied Psychology Unit in Cambridge. MRC used the interregnum between one director retiring and a new one being appointed to attempt to relocate staff who were deemed to be underperforming. As with all these exercises, management appeared to envisage an institution that would remain exactly as before with a few people subtracted. But of course it doesn't work that way. Ours was a small and highly collegial research unit, and living through the process of having everyone undergo a high-stakes evaluation affected all of us. I can't remember how long the process went on for, but it felt like years. A colleague who, like me, was not under threat, captured it well when he said that he had previously felt a warm glow when he received mail with the MRC letterhead. Now he felt a cold chill in anticipation of some fresh hell. I've had similar conversations at other universities with senior academics who can recall times when those managing the institution were regarded as benevolent, albeit in a (desirably) distant manner. Perhaps it helped that in those days vice chancellors often came from the ranks of the institution's senior academics, and saw their primary goal as ensuring a strong reputation for teaching and research.

As the sector has been increasingly squeezed over the years, this cosy scenario has been overturned, with a new cadre of managers appearing, with an eye on the bottom line. The focus has been on attracting student consumers with glitzy buildings and facilities, setting up overseas campuses to boost revenues, and recruiting research superstars who will embellish the REF portfolio. Such strategies seldom succeeded, with many universities left poorer by the same vice-chancellors who were appointed because of their apparent business acumen.  

There has been a big shift from the traditional meaning of "university" as a community of teachers and scholars. Academic staff are seen as dispensable, increasingly employed on short-term contracts. Whereas in the past there might be a cadre of academics who felt loyal to their institution and pride in being part of a group dedicated to the furtherance of knowledge, we now have a precariat who live in fear of what might happen if targets are not met. And this is all happening at a time when funders are realising that effective research is done by teams of people, rather than lone geniuses (see e.g. this report). Such teams can take years to build, but can be destroyed overnight, by those who measure academic worth by criteria such as grant income, or whether the Vice Chancellor understands the subject matter. I wonder too what plans there are for graduate students whose supervisors are unexpectedly dismissed - if interdependence of the academic community is ignored, there will be impacts that go beyond those in the immediate firing line.  

Those overseeing redundancies think they can cut out a pound of flesh from the university body, but unlike Shylock, who knew full well what he was doing, they seem to believe they can do so without weakening the whole institution. They will find to their cost that they are doing immense damage, not just to their reputation, but to the satisfaction and productivity of all who work and study there.  

Public letters of concern 

University of Leicester, Departments of Neuroscience, Psychology and Behaviour https://tinyurl.com/SaveNPB-UoL  

University of Liverpool, use of inappropriate metrics https://docs.google.com/document/d/1OJ28MT78MCMNkUtFXkLw3xROUxK7Mfr8yN8b-E2KDUg/edit#  

Equality and Diversity implications, University of Liverpool https://forms.gle/2AyJ2nHKiEchA3dP9

 

*correction made 5th March 2021 

Saturday, 23 January 2021

Time to ditch relative risk in media reports

The Winton Centre for Risk and Evidence Communication at the University of Cambridge has done some sterling work in developing guidelines for communicating risk to the general public. In a short video,  David Spiegelhalter explains how relative risk can be misleading when the baseline for a condition is not reported. For instance, he noted that many women stopped taking a contraceptive pill after hearing media reports that it was associated with a doubling in the rate of thrombo-embolism. In terms of absolute risk the increase sounds much less alarming, going from 1 in 7000 to 2 in 7000. 
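To make the contrast concrete, here is a toy calculation of my own (not from the Winton Centre video) applying both framings to the pill example above and to the new-variant figures discussed below:

```python
# Toy comparison of relative vs absolute risk framings; the numbers
# come from the examples discussed in this post.

def framings(baseline: float, new: float) -> str:
    rel = (new - baseline) / baseline * 100      # relative increase, %
    abs_per_10k = (new - baseline) * 10_000      # absolute increase per 10,000
    return f"relative: +{rel:.0f}%; absolute: +{abs_per_10k:.1f} per 10,000"

print(framings(1/7000, 2/7000))       # pill: +100%; +1.4 per 10,000
print(framings(10/1000, 13.5/1000))   # variant: +35%; +35.0 per 10,000
```

The same change sounds dramatic as a percentage and modest as a count of affected people, which is precisely why the baseline matters.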

One can understand how those who aren't scientifically trained can get this wrong. But we might hope that,  in a pandemic, where public understanding of risk is so crucial, particular care would be taken to be realistic without being alarmist. It was, therefore, depressing to see a report on Channel 4 news last night where two scientists clearly explained the evidence on Covid variants in terms of absolute risk, impeccably reflecting the Winton Centre's advice, only to have the reporters translate the numbers into relative risk. I have transcribed the relevant sections: 

0:44 Reporter: "The latest evidence from the government's advisers is that this new variant is more deadly. And this is what it means:"

Patrick Vallance: "If you took somebody in their sixties, a man in their sixties, the average risk is that for 1000 people who got infected, roughly ten would be expected to unfortunately die with the virus. With the new variant, with 1000 people infected, roughly 13 or 14 people might be expected to die." 

Reporter: "That’s a thirty to forty per cent increase in mortality." 

5:15 Reporter (Krishnan Guru-Murthy): "But this is a high percent of increase, isn't it. Thirty to forty percent increase of mortality, on a relatively small number." 

Neil Ferguson: "Yes. For a 60-year-old at the current time, there's about a one in a hundred risk of dying. So that means 10 in 1000 people who get the infection are likely to die, despite improvements in treatment. And this new variant might take that up to 13 or 14 in a 1000." 

The reporters are likely to stoke anxiety when they translate clear communication by the scientists into something that sounds a lot more scary. I hope this is not their intention: Channel 4 is one of the few news outlets that I regularly watch and in general I find it well-researched and informative. I would urge the reporters to watch the Winton Centre video, which in less than 5 minutes makes a clear, compelling case for dropping relative risk altogether in media reports. 

 

This blogpost has been corrected to remove the name of Anja Popp as first reporter. She confirmed she was not in this segment. My apologies.

Sunday, 6 December 2020

Faux peer-reviewed journals: a threat to research integrity

 

Despite all its imperfections, peer review is one marker of scientific quality – it indicates that an article has been evaluated prior to publication by at least one, and usually several, experts in the field. An academic journal that does not use peer review would not usually be regarded as a serious source and we would not expect to see it listed in a database such as Clarivate Analytics' Web of Science Core Collection, which "includes only journals that demonstrate high levels of editorial rigor and best practice". Scientists are often evaluated by their citations in Web of Science, with the assumption that this will include only peer-reviewed articles. This makes gaming of citations harder than is the case for less selective databases such as Google Scholar. The selective criteria for inclusion, and claims by Clarivate Analytics to take research integrity very seriously, are presumably the main reasons why academic institutions are willing to pay for access to Web of Science, rather than relying on Google Scholar.

Nevertheless, some journals listed in Web of Science include significant numbers of documents that are not peer-reviewed. I first became aware of this when investigating the publishing profiles of authors with remarkably high rates of publications in a small number of journals. I found that Mark Griffiths, a hyperprolific author who has been interviewed about his astounding rate of publication by the Times Higher Education, has a junior collaborator, Mohammed Mamun, who clearly sees Griffiths as a role model and is starting to rival him in publication rate. Griffiths is a co-author on 31 of 33 publications authored by Mamun since 2019. While still an undergraduate, Mamun has become the self-styled Director of the Undergraduate Research Organization in Dhaka, subsequently renamed as the Centre for Health Innovation, Networking, Training, Action and Research – Bangladesh. These institutions do not appear to have any formal link with an academic institution, though on ORCID, Mamun lists an ongoing educational affiliation to Jahangirnagar University. His H-index from Web of Science is 11. This drops if one excludes self-citations, which constitute around half of his citations, but nevertheless, this is remarkable for an undergraduate.
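For readers unfamiliar with the metric: the h-index is the largest h such that h of an author's papers each have at least h citations. A minimal sketch of the calculation, using made-up citation counts rather than Mamun's actual record, shows how excluding self-citations can deflate it:

```python
# Minimal sketch of the h-index; the citation counts are invented,
# not taken from any real author's Web of Science record.

def h_index(citation_counts):
    """Largest h such that h papers have at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    return sum(1 for rank, c in enumerate(counts, start=1) if c >= rank)

all_cites  = [25, 18, 14, 12, 11, 11, 9, 7, 5, 4, 3, 2]   # hypothetical
self_cites = [12, 10,  7,  6,  6,  5, 4, 3, 2, 2, 1, 1]   # hypothetical
external   = [a - s for a, s in zip(all_cites, self_cites)]

print(h_index(all_cites))  # 7
print(h_index(external))   # 5
```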

Of the 31 papers listed on Web of Science as coauthored by Mamun and Griffiths, 19 are categorised as letters to the Editor. Letters are a rather odd and heterogeneous category. In most journals they will be brief comments on papers published in the journal, or responses to such comments, and in such cases it is not unusual for the editor to make a decision to publish or not without seeking peer review. However, the letters coauthored by Griffiths and Mamun go beyond this kind of content, and include some brief reports of novel data, as well as case reports on suicide or homicide gleaned from press reports*. I took a closer look at three journals where these letters appeared to try and understand how such material fitted in with their publication criteria. 

The International Journal of Mental Health and Addiction (Springer) featured in an earlier blogpost, on the basis of publishing a remarkably high number of articles authored by Griffiths. In that analysis I did not include letters. The journal gives no guidelines about the format or content of letters, and has published only 16 of them since 2019, but 10 of these are authored by Griffiths, mostly with Mamun. As noted in my prior blogpost, the journal provides no information about dates of submission and acceptance, so one cannot tell whether letters were peer-reviewed. The publisher told me last July and confirmed again in September that they are investigating the issues I raised in my last blogpost, but there has to date been no public report on the outcome.  

Psychiatry Research, published by Elsevier, is explicit that Case Reports can be included as letters, and provides formatting information (750-1000 words or less, up to 5 references, no tables or figures). Before 2019, around 2-4% of publications in the journal were letters, but this jumped to an astounding 22.4% in 2020, perhaps reflecting what has been termed 'covidization' of research.

The Asian Journal of Psychiatry (AsJP), also published by Elsevier, provides formatting information only (600-800 words, 10 references, 1 figure or table), and does not specify what would constitute the content of letters, but it publishes an extremely large number of them, as shown in Figure 1. This trend started before 2020, so cannot be attributed solely to COVID-mania. 

 

Figure 1: Categorization of documents published in Asian Journal of Psychiatry between 2015 and 2020.

For most journals, letters constitute a negligible proportion of their output, and so it is unlikely to have much impact whether or not they are peer reviewed. However, for a journal like AsJP, where letters outnumber regular articles, the peer review status of letters becomes of interest.  

The only information one can obtain on this point is indirect, by scanning the submission and acceptance dates of articles to see if the lag between these two is so short as to suggest there was no peer review. Relevant data for all letters published in AsJP and Psychiatry Research are shown in Table 1. It is apparent that there is a wide range of publication lags, some extending for weeks or months, but that lags of 7 days or less are not uncommon. There is no indication that Mamun/Griffiths have favourable treatment, but they benefit from the same rapid acceptance rate as other letters, with 40-50% chance of acceptance within two weeks.  
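The script I used to extract these dates is linked in the addendum below; the core of the lag check itself is simple, as this minimal sketch shows (the records here are invented, not real journal data):

```python
# Minimal sketch of the publication-lag check; invented records, not
# real journal data. The real extraction script (Faux Peer Review.rmd)
# is linked in the addendum.
from datetime import date

records = [
    {"doi": "10.xxxx/a", "received": date(2020, 1, 3), "accepted": date(2020, 1, 8)},
    {"doi": "10.xxxx/b", "received": date(2020, 2, 1), "accepted": date(2020, 5, 12)},
]

for r in records:
    lag = (r["accepted"] - r["received"]).days
    flag = "  <- too quick for meaningful peer review?" if lag <= 7 else ""
    print(f'{r["doi"]}: lag {lag} days{flag}')
```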

 

Table 1: Proportions of Letters categorized by publication lag in two journals publishing a large number of Letters. 

Thus an investigation into unusual publishing patterns by one pair of authors has drawn attention to at least two journals that appear to provide an easy way to accumulate a large number of publications that are not peer-reviewed but are nevertheless cited in Web of Science. If one adds a high rate of self-citation to the mix, and a willingness of editors to accept recycled newspaper reports as case studies, one starts to understand how it is possible for an inexperienced undergraduate to build up a substantial number of citations. 

I have drawn this situation to the attention of the integrity officers at Springer and Elsevier, but my previous experience with the Matson publication ring does not fill me with confidence that publishers will take any steps to monitor or control such editorial practices.

Clarivate Analytics recently published a report on Research Integrity, in which they note how "all manner of interventions and manipulations are nowadays directed to the goal of attaining scores and a patina of prestige, for individual or journal, although it may be a thin coat hiding a base metal." They described various methods used by researchers to game citations, but did not include the use of non-peer-reviewed categories of paper to gain numerous citations. They describe technological solutions to improve integrity, but I would argue they need to add to their armamentarium a focus on the peer review process. The lag between submission and acceptance is a far from perfect indicator but it can give a rough proxy for the likelihood that peer review was undertaken. Requiring journals to make this information available, and to include it in the record of Web of Science, would go some way to identifying at least one form of gaming. 

No doubt Prof Griffiths regards himself as a benign influence, helping an enthusiastic and energetic young person from a low-income country establish himself. He has an impressive number of collaborators from all over the world, many of whom have written in his support. Collaboration between those from very different cultures is generally to be welcomed. And I must stress that I have no objection to someone young and inexperienced making a scientific contribution - that is entirely in line with Mertonian norms of universalism. It is the departure from the norm of disinterestedness that concerns me. An obsession with 'publish or perish' leads to gaming of publications and citation counts as a way to get ahead. Research is treated like a game where the focus becomes one's own success rather than the advancement of science. This is a consequence of our distorted incentive structures and it has a damaging effect on scientific quality. It is frankly depressing to see such attitudes being inculcated in junior researchers from around the world.

 *Addendum

Letters co-authored by Mamun and Griffiths based on newspaper reports (from Web of Science). Self-cites refers to number of citations to articles by Griffiths and/or Mamun. Publication lags in square brackets calculated from date of submission and date of acceptance taken from the journal website. The script used to extract these dates, and .csv files with journal records, are available on: https://github.com/oscci/miscellaneous, see Faux Peer Review.rmd 

Mamun, MA; Griffiths, MD (2020) A rare case of Bangladeshi student suicide by gunshot due to unusual multiple causalities Asian Journal of Psychiatry, 49 10.1016/j.ajp.2020.101951 [5 self-cites, lag 6 days] 

Mamun, MA; Misti, JM; Griffiths, MD (2020) Suicide of Bangladeshi medical students: Risk factor trends based on Bangladeshi press reports Asian Journal of Psychiatry, 48 10.1016/j.ajp.2019.101905 [2 self-cites, lag 91 days] 

Mamun, MA; Chandrima, RM; Griffiths, MD (2020) Mother and Son Suicide Pact Due to COVID-19-Related Online Learning Issues in Bangladesh: An Unusual Case Report International Journal of Mental Health and Addiction, 10.1007/s11469-020-00362-5 [14 self-cites]

Bhuiyan, AKMI; Sakib, N; Pakpour, AH; Griffiths, MD; Mamun, MA (2020) COVID-19-Related Suicides in Bangladesh Due to Lockdown and Economic Factors: Case Study Evidence from Media Reports International Journal of Mental Health and Addiction, 10.1007/s11469-020-00307-y [10 self-cites]

Mamun, MA; Griffiths, MD (2020) Young Teenage Suicides in Bangladesh-Are Mandatory Junior School Certificate Exams to Blame? International Journal of Mental Health and Addiction, 10.1007/s11469-020-00275-3 [11 self-cites]

Mamun, MA; Siddique, A; Sikder, MT; Griffiths, MD (2020) Student Suicide Risk and Gender: A Retrospective Study from Bangladeshi Press Reports International Journal of Mental Health and Addiction, 10.1007/s11469-020-00267-3 [6 self-cites]

Griffiths, MD; Mamun, MA (2020) COVID-19 suicidal behavior among couples and suicide pacts: Case study evidence from press reports Psychiatry Research, 289 10.1016/j.psychres.2020.113105 [10 self-cites, lag 161 days]

Mamun, MA; Griffiths, MD (2020) PTSD-related suicide six years after the Rana Plaza collapse in Bangladesh Psychiatry Research, 287 10.1016/j.psychres.2019.112645 [0 self-cites, lag 2 days]

 

Sunday, 16 August 2020

PEPIOPs – prolific editors who publish in their own publications

I recently reported on a method for identifying authors who contribute an unusually high proportion of papers to a specific journal. As I noted in my previous blogpost, one cannot assume any wrongdoing just on the grounds of a high publication rate, but when there is a close link between the author in question and the journal's editor, then this raises concerns about preferential treatment and integrity of the peer review process.

In running my analyses, I also found some cases where there wasn't just a close link between the most prolific author in a journal and the editor: they were one and the same person! At around the time I was unearthing these results, Elisabeth Bik tweeted to ask if anyone had examples of editors publishing in their own journals. I realised the analysis scripts I had developed for the 'percent by most prolific' analysis could be readily adapted to look at this question, and so I analysed journals in the broad domain of psychology and behavioural science from six publishers: Springer Nature, Wiley, Taylor and Francis, Sage, Elsevier and American Psychological Association (APA). I focused on those responsible for editorial decisions – typically termed Editor-in-Chief or Associate Editor. Sometimes this was hard to judge: in general, I included 'Deputy editors' with 'Editors-in-Chief' if they were very few in number. I ignored members of editorial boards.
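The actual scripts are on GitHub (linked below); stripped to its essentials, the check amounts to counting papers per author in a journal and flagging editors among the most prolific. A minimal sketch with invented names:

```python
# Minimal sketch: count papers per author in one journal and flag
# editors among the most prolific. Names are invented, and this is
# not the actual analysis code.
from collections import Counter

papers = [  # one author list per paper, as in a Web of Science export
    ["Smith, J", "Jones, A"],
    ["Smith, J", "Editor, E"],
    ["Editor, E"],
    ["Editor, E", "Brown, K"],
]
editors = {"Editor, E"}  # Editor(s)-in-Chief / Associate Editors

counts = Counter(author for author_list in papers for author in author_list)
for author, n in counts.most_common():
    tag = "  <- editor" if author in editors else ""
    print(f"{author}: {n} papers{tag}")
```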

Before reporting the findings, I thought I should consult more widely about whether people think it is appropriate for an editor to publish in their own journal.

As a starting point, I looked at guidelines that the Committee on Publication Ethics (COPE) has provided for editors:

Can editors publish in their own journal? 
While you should not be denied the ability to publish in your own journal, you must take extra precautions not to exploit your position or to create an impression of impropriety. Your journal must have a procedure for handling submissions from editors or members of the editorial board that will ensure that the peer review is handled independently of the author/editor. We also recommend that you describe the process in a commentary or similar note once the paper is published.

They link to a case report that notes how this issue can be particularly problematic if the journal is in a narrow field with few publication outlets, and the author is likely to be identifiable even if blinding is adopted.

I thought that it would be interesting to see what the broader academic community thought about this. In a Twitter poll I asked specifically about Editors-in-Chief:

The modal response was that it was OK for an Editor-in-Chief to publish in their own journal at a modest rate (once a year or less). There was general disapproval of editors publishing prolifically in their own journals – even though I emphasised that I was referring to situations where the editorial process was kept independent of the author, as recommended by COPE.

My poll question was not perfectly phrased - I did not specify the type of article: it is usual, for instance, for an editor to write editorials! But I later clarified that I meant regular articles and reviews, and I think most people interpreted it in that way.

The poll provoked some useful debate among those who approved and disapproved of editors publishing in their own journals.

Let's start with some of the reasons against this practice. I should lay my cards on the table and state that personally I am in agreement with Antonia Hamilton, who tweeted to say that as Editor in Chief of the Quarterly Journal of Experimental Psychology she would not be submitting papers from her lab to that journal during her term of office, citing concerns about Conflict of Interest. When I was Editor-in-Chief of Journal of Child Psychology and Psychiatry, I felt the same: I was concerned that my editorial colleagues would be put in a difficult position if they had to handle one of my papers, and if my papers were accepted, then it might be seen as involving a biased decision, even if that was not the case. Chris Chambers, who was among the 34% who thought an Editor-in-Chief should never publish in their own journal, expressed concerns about breaches of confidentiality, given that the Editor-in-Chief would be able to access identities of reviewers. Others, though, questioned whether that was the case at all journals.

Turning to those who thought it was acceptable for an Editor-in-Chief to publish in their own journal, several people argued that it would be unfair, not just on the editor themselves, but also on members of their research group, if presence of an editorial co-author meant they could not submit to the journal. How much of an issue this is will depend, of course, on how big your research group is, and what other publication outlets are available.  Several people felt there was a big difference between cases where the editor was lead author, and those where they played a more minor role. The extreme case is when an editor is a member of a large research consortium led by someone else. It would seem unduly restrictive to debar a paper by a consortium of 100 researchers just because one of them was the Editor in Chief of the journal. It is worth remembering too that, while being a journal editor is a prestigious role, it is hard work, and publishers may be concerned that it becomes a seriously unattractive option if a major outlet for papers is suddenly off-limits.

Be this as it may, the impression from the Twitter poll was that three papers or more per annum starts to look excessive. In my analysis, shown here, I found several journals where the Editor-in-Chief had coauthored 15 or more articles (excluding editorials/commentaries) in their own journal between 2015 and 2019. I refer to these as PEPIOPs (prolific editors who publish in their own publications). For transparency, I've made my methods and results available, so that others can extend the analysis if they wish to do so: see https://github.com/oscci/Bibliometric_analyses. More legible versions of the tables are also available.

Table 1 shows the number of journals with a PEPIOP, according to publisher. 
Table 1: Number of journals with prolific authors. Columns AE and EIC indicate cases where Associate Editor or Editor-in-Chief published 15+ papers in the journal between 2015-2019
The table makes it clear that it is relatively rare for an Editor-in-Chief to be a PEPIOP: there were no cases for APA journals, and the highest proportion was 5.6% for Elsevier journals. Note that there are big differences between publishers in terms of how common it is for any author to publish 15+ papers in the journal over 5 years: this was true for around 25% of the Elsevier and Springer journals, and much less common for Sage, APA and Taylor & Francis. This probably reflects the subject matter of the journals - prolific publication, often on multiauthored papers, is more common in biosciences than social sciences.

Individual PEPIOPs are shown in Table 2. An asterisk denotes an Editor-in-Chief who authored more articles in the journal than any other person between 2015-2019.

These numbers should be treated with caution – I did not check whether any of these editors had only recently adopted the editorial role - largely because this information is not easy to find from journal websites*. I looked at a small sample of papers from individual PEPIOPs to see if there was anything I had overlooked, but I haven't checked every article - as there are a great many of them. There was one case where this revealed misclassification: the editor of Journal of Paediatrics and Child Health wrote regular short commentaries of around 800 words summarising other papers: this seems an entirely unremarkable thing for an editor to do, but they were classified by Web of Science as 'articles', which led to him being categorised as a PEPIOP. This illustrates how bibliometrics can be misleading.
Table 2: Editors-in-chief who published 15+ papers in own journal 2015-2019
In general I did not find any ready explanation for highly prolific outputs. And where I did spot checks, I found only one case where there was an explanatory note of the kind COPE recommended (in the journal Maturitas) - on the contrary, in many cases there was a statement confirming no conflict of interest for any of the authors. Sometimes there were lists of conflicts relating to commercial aspects, but it was clear that authors did not regard their editorial role as posing a conflict. It is also worth mentioning that there were cases where an Editor-in-Chief was senior author.

The more I have pondered this, the more I realise that the reason why I am particularly concerned by Editor-in-Chief PEPIOPs is that this is the person who is ultimately responsible for the integrity of the publication process. Although publishers are increasingly alert to issues of research integrity, journal websites typically advise readers to contact the Editor-in-Chief if there is a problem. Complaints about editorial decisions, demands for retractions, or concerns about potential fraud or malpractice all come to the Editor-in-Chief. That's fine provided the Editor-in-Chief is a pillar of respectability. The problem is that not all editors are the paragons that we'd like them to be: one can adopt rather a jaundiced view after encountering editors who don't even bother to reply to expressions of concern, or adopt a secretive or dismissive attitude if plausible problems in their journal are flagged up. And there are also cases on record where an editor has abused the power of their position to enhance their own publication record. For this reason, I would strongly advise any Editor-in-Chief not to be a PEPIOP; it just looks bad, even if a robust, independent editorial process has been followed. We need to have confidence and trust in those who act as gatekeepers to journals.

My principal suggestion is that publishers could improve their record for integrity if they went beyond just endorsing COPE guidelines and started to actively check whether those guidelines are adhered to. 

*Note: 21st Aug 2020: A commenter on this blogpost has noted that Helai Huang fits this category - this editor has only been in post since 1 Jan 2020, so the period of prolific publication predated being an editor-in-chief.

Friday, 7 August 2020

Bishopblog catalogue (updated 7 August 2020)

Source: http://www.weblogcartoons.com/2008/11/23/ideas/

Those of you who follow this blog may have noticed a lack of thematic coherence. I write about whatever is exercising my mind at the time, which can range from technical aspects of statistics to the design of bathroom taps. I decided it might be helpful to introduce a bit of order into this chaotic melange, so here is a catalogue of posts by topic.

Language impairment, dyslexia and related disorders
The common childhood disorders that have been left out in the cold (1 Dec 2010) What's in a name? (18 Dec 2010) Neuroprognosis in dyslexia (22 Dec 2010) Where commercial and clinical interests collide: Auditory processing disorder (6 Mar 2011) Auditory processing disorder (30 Mar 2011) Special educational needs: will they be met by the Green paper proposals? (9 Apr 2011) Is poor parenting really to blame for children's school problems? (3 Jun 2011) Early intervention: what's not to like? (1 Sep 2011) Lies, damned lies and spin (15 Oct 2011) A message to the world (31 Oct 2011) Vitamins, genes and language (13 Nov 2011) Neuroscientific interventions for dyslexia: red flags (24 Feb 2012) Phonics screening: sense and sensibility (3 Apr 2012) What Chomsky doesn't get about child language (3 Sept 2012) Data from the phonics screen (1 Oct 2012) Auditory processing disorder: schisms and skirmishes (27 Oct 2012) High-impact journals (Action video games and dyslexia: critique) (10 Mar 2013) Overhyped genetic findings: the case of dyslexia (16 Jun 2013) The arcuate fasciculus and word learning (11 Aug 2013) Changing children's brains (17 Aug 2013) Raising awareness of language learning impairments (26 Sep 2013) Good and bad news on the phonics screen (5 Oct 2013) What is educational neuroscience? (25 Jan 2014) Parent talk and child language (17 Feb 2014) My thoughts on the dyslexia debate (20 Mar 2014) Labels for unexplained language difficulties in children (23 Aug 2014) International reading comparisons: Is England really doing so poorly? (14 Sep 2014) Our early assessments of schoolchildren are misleading and damaging (4 May 2015) Opportunity cost: a new red flag for evaluating interventions (30 Aug 2015) The STEP Physical Literacy programme: have we been here before? (2 Jul 2017) Prisons, developmental language disorder, and base rates (3 Nov 2017) Reproducibility and phonics: necessary but not sufficient (27 Nov 2017) Developmental language disorder: the need for a clinically relevant definition (9 Jun 2018) Changing terminology for children's language disorders (23 Feb 2020) Developmental Language Disorder (DLD) in relation to DSM5 (29 Feb 2020)

Autism
Autism diagnosis in cultural context (16 May 2011) Are our ‘gold standard’ autism diagnostic instruments fit for purpose? (30 May 2011) How common is autism? (7 Jun 2011) Autism and hypersystematising parents (21 Jun 2011) An open letter to Baroness Susan Greenfield (4 Aug 2011) Susan Greenfield and autistic spectrum disorder: was she misrepresented? (12 Aug 2011) Psychoanalytic treatment for autism: Interviews with French analysts (23 Jan 2012) The ‘autism epidemic’ and diagnostic substitution (4 Jun 2012) How wishful thinking is damaging Peta's cause (9 June 2014) NeuroPointDX's blood test for Autism Spectrum Disorder ( 12 Jan 2019)

Developmental disorders/paediatrics
The hidden cost of neglected tropical diseases (25 Nov 2010) The National Children's Study: a view from across the pond (25 Jun 2011) The kids are all right in daycare (14 Sep 2011) Moderate drinking in pregnancy: toxic or benign? (21 Nov 2012) Changing the landscape of psychiatric research (11 May 2014) The sinister side of French psychoanalysis revealed (15 Oct 2019)

Genetics
Where does the myth of a gene for things like intelligence come from? (9 Sep 2010) Genes for optimism, dyslexia and obesity and other mythical beasts (10 Sep 2010) The X and Y of sex differences (11 May 2011) Review of How Genes Influence Behaviour (5 Jun 2011) Getting genetic effect sizes in perspective (20 Apr 2012) Moderate drinking in pregnancy: toxic or benign? (21 Nov 2012) Genes, brains and lateralisation (22 Dec 2012) Genetic variation and neuroimaging (11 Jan 2013) Have we become slower and dumber? (15 May 2013) Overhyped genetic findings: the case of dyslexia (16 Jun 2013) Incomprehensibility of much neurogenetics research ( 1 Oct 2016) A common misunderstanding of natural selection (8 Jan 2017) Sample selection in genetic studies: impact of restricted range (23 Apr 2017) Pre-registration or replication: the need for new standards in neurogenetic studies (1 Oct 2017) Review of 'Innate' by Kevin Mitchell ( 15 Apr 2019) Why eugenics is wrong (18 Feb 2020)

Neuroscience
Neuroprognosis in dyslexia (22 Dec 2010) Brain scans show that… (11 Jun 2011)  Time for neuroimaging (and PNAS) to clean up its act (5 Mar 2012) Neuronal migration in language learning impairments (2 May 2012) Sharing of MRI datasets (6 May 2012) Genetic variation and neuroimaging (1 Jan 2013) The arcuate fasciculus and word learning (11 Aug 2013) Changing children's brains (17 Aug 2013) What is educational neuroscience? ( 25 Jan 2014) Changing the landscape of psychiatric research (11 May 2014) Incomprehensibility of much neurogenetics research ( 1 Oct 2016)

Reproducibility
Accentuate the negative (26 Oct 2011) Novelty, interest and replicability (19 Jan 2012) High-impact journals: where newsworthiness trumps methodology (10 Mar 2013) Who's afraid of open data? (15 Nov 2015) Blogging as post-publication peer review (21 Mar 2013) Research fraud: More scrutiny by administrators is not the answer (17 Jun 2013) Pressures against cumulative research (9 Jan 2014) Why does so much research go unpublished? (12 Jan 2014) Replication and reputation: Whose career matters? (29 Aug 2014) Open code: not just data and publications (6 Dec 2015) Why researchers need to understand poker (26 Jan 2016) Reproducibility crisis in psychology (5 Mar 2016) Further benefit of registered reports (22 Mar 2016) Would paying by results improve reproducibility? (7 May 2016) Serendipitous findings in psychology (29 May 2016) Thoughts on the Statcheck project (3 Sep 2016) When is a replication not a replication? (16 Dec 2016) Reproducible practices are the future for early career researchers (1 May 2017) Which neuroimaging measures are useful for individual differences research? (28 May 2017) Prospecting for kryptonite: the value of null results (17 Jun 2017) Pre-registration or replication: the need for new standards in neurogenetic studies (1 Oct 2017) Citing the research literature: the distorting lens of memory (17 Oct 2017) Reproducibility and phonics: necessary but not sufficient (27 Nov 2017) Improving reproducibility: the future is with the young (9 Feb 2018) Sowing seeds of doubt: how Gilbert et al's critique of the reproducibility project has played out (27 May 2018) Preprint publication as karaoke (26 Jun 2018) Standing on the shoulders of giants, or slithering around on jellyfish: Why reviews need to be systematic (20 Jul 2018) Matlab vs open source: costs and benefits to scientists and society (20 Aug 2018) Responding to the replication crisis: reflections on Metascience 2019 (15 Sep 2019) Manipulated images: hiding in plain sight (13 May 2020) Frogs or termites: gunshot or cumulative science? (6 Jun 2020)

Statistics
Book review: biography of Richard Doll (5 Jun 2010) Book review: the Invisible Gorilla (30 Jun 2010) The difference between p < .05 and a screening test (23 Jul 2010) Three ways to improve cognitive test scores without intervention (14 Aug 2010) A short nerdy post about the use of percentiles (13 Apr 2011) The joys of inventing data (5 Oct 2011) Getting genetic effect sizes in perspective (20 Apr 2012) Causal models of developmental disorders: the perils of correlational data (24 Jun 2012) Data from the phonics screen (1 Oct 2012) Moderate drinking in pregnancy: toxic or benign? (21 Nov 2012) Flaky chocolate and the New England Journal of Medicine (13 Nov 2012) Interpreting unexpected significant results (7 June 2013) Data analysis: Ten tips I wish I'd known earlier (18 Apr 2014) Data sharing: exciting but scary (26 May 2014) Percentages, quasi-statistics and bad arguments (21 July 2014) Why I still use Excel (1 Sep 2016) Sample selection in genetic studies: impact of restricted range (23 Apr 2017) Prospecting for kryptonite: the value of null results (17 Jun 2017) Prisons, developmental language disorder, and base rates (3 Nov 2017) How Analysis of Variance Works (20 Nov 2017) ANOVA, t-tests and regression: different ways of showing the same thing (24 Nov 2017) Using simulations to understand the importance of sample size (21 Dec 2017) Using simulations to understand p-values (26 Dec 2017) One big study or two small studies? (12 Jul 2018)

Journalism/science communication
Orwellian prize for scientific misrepresentation (1 Jun 2010) Journalists and the 'scientific breakthrough' (13 Jun 2010) Science journal editors: a taxonomy (28 Sep 2010) Orwellian prize for journalistic misrepresentation: an update (29 Jan 2011) Academic publishing: why isn't psychology like physics? (26 Feb 2011) Scientific communication: the Comment option (25 May 2011)  Publishers, psychological tests and greed (30 Dec 2011) Time for academics to withdraw free labour (7 Jan 2012) 2011 Orwellian Prize for Journalistic Misrepresentation (29 Jan 2012) Time for neuroimaging (and PNAS) to clean up its act (5 Mar 2012) Communicating science in the age of the internet (13 Jul 2012) How to bury your academic writing (26 Aug 2012) High-impact journals: where newsworthiness trumps methodology (10 Mar 2013)  A short rant about numbered journal references (5 Apr 2013) Schizophrenia and child abuse in the media (26 May 2013) Why we need pre-registration (6 Jul 2013) On the need for responsible reporting of research (10 Oct 2013) A New Year's letter to academic publishers (4 Jan 2014) Journals without editors: What is going on? (1 Feb 2015) Editors behaving badly? (24 Feb 2015) Will Elsevier say sorry? (21 Mar 2015) How long does a scientific paper need to be? (20 Apr 2015) Will traditional science journals disappear? (17 May 2015) My collapse of confidence in Frontiers journals (7 Jun 2015) Publishing replication failures (11 Jul 2015) Psychology research: hopeless case or pioneering field? (28 Aug 2015) Desperate marketing from J. Neuroscience ( 18 Feb 2016) Editorial integrity: publishers on the front line ( 11 Jun 2016) When scientific communication is a one-way street (13 Dec 2016) Breaking the ice with buxom grapefruits: Pratiques de publication and predatory publishing (25 Jul 2017) Should editors edit reviewers? ( 26 Aug 2018) Corrigendum: a word you may hope never to encounter (3 Aug 2019) Percent by most prolific author score and editorial bias (12 Jul 2020)

Social Media
A gentle introduction to Twitter for the apprehensive academic (14 Jun 2011) Your Twitter Profile: The Importance of Not Being Earnest (19 Nov 2011) Will I still be tweeting in 2013? (2 Jan 2012) Blogging in the service of science (10 Mar 2012) Blogging as post-publication peer review (21 Mar 2013) The impact of blogging on reputation ( 27 Dec 2013) WeSpeechies: A meeting point on Twitter (12 Apr 2014) Email overload ( 12 Apr 2016) How to survive on Twitter - a simple rule to reduce stress (13 May 2018)

Academic life
An exciting day in the life of a scientist (24 Jun 2010) How our current reward structures have distorted and damaged science (6 Aug 2010) The challenge for science: speech by Colin Blakemore (14 Oct 2010) When ethics regulations have unethical consequences (14 Dec 2010) A day working from home (23 Dec 2010) Should we ration research grant applications? (8 Jan 2011) The one hour lecture (11 Mar 2011) The expansion of research regulators (20 Mar 2011) Should we ever fight lies with lies? (19 Jun 2011) How to survive in psychological research (13 Jul 2011) So you want to be a research assistant? (25 Aug 2011) NHS research ethics procedures: a modern-day Circumlocution Office (18 Dec 2011) The REF: a monster that sucks time and money from academic institutions (20 Mar 2012) The ultimate email auto-response (12 Apr 2012) Well, this should be easy…. (21 May 2012) Journal impact factors and REF2014 (19 Jan 2013) An alternative to REF2014 (26 Jan 2013) Postgraduate education: time for a rethink (9 Feb 2013) Ten things that can sink a grant proposal (19 Mar 2013) Blogging as post-publication peer review (21 Mar 2013) The academic backlog (9 May 2013) Discussion meeting vs conference: in praise of slower science (21 Jun 2013) Why we need pre-registration (6 Jul 2013) Evaluate, evaluate, evaluate (12 Sep 2013) High time to revise the PhD thesis format (9 Oct 2013) The Matthew effect and REF2014 (15 Oct 2013) The University as big business: the case of King's College London (18 June 2014) Should vice-chancellors earn more than the prime minister? (12 July 2014) Some thoughts on use of metrics in university research assessment (12 Oct 2014) Tuition fees must be high on the agenda before the next election (22 Oct 2014) Blaming universities for our nation's woes (24 Oct 2014) Staff satisfaction is as important as student satisfaction (13 Nov 2014) Metricophobia among academics (28 Nov 2014) Why evaluating scientists by grant income is stupid (8 Dec 2014) Dividing up the pie in relation to REF2014 (18 Dec 2014) Shaky foundations of the TEF (7 Dec 2015) A lamentable performance by Jo Johnson (12 Dec 2015) More misrepresentation in the Green Paper (17 Dec 2015) The Green Paper’s level playing field risks becoming a morass (24 Dec 2015) NSS and teaching excellence: wrong measure, wrongly analysed (4 Jan 2016) Lack of clarity of purpose in REF and TEF (2 Mar 2016) Who wants the TEF? (24 May 2016) Cost benefit analysis of the TEF (17 Jul 2016) Alternative providers and alternative medicine (6 Aug 2016) We know what's best for you: politicians vs. experts (17 Feb 2017) Advice for early career researchers re job applications: Work 'in preparation' (5 Mar 2017) Should research funding be allocated at random? (7 Apr 2018) Power, responsibility and role models in academia (3 May 2018) My response to the EPA's 'Strengthening Transparency in Regulatory Science' (9 May 2018) More haste less speed in calls for grant proposals (11 Aug 2018) Has the Society for Neuroscience lost its way? (24 Oct 2018) The Paper-in-a-Day Approach (9 Feb 2019) Benchmarking in the TEF: Something doesn't add up (3 Mar 2019) The Do It Yourself conference (26 May 2019) A call for funders to ban institutions that use grant capture targets (20 Jul 2019) Research funders need to embrace slow science (1 Jan 2020) Should I stay or should I go: When debate with opponents should be avoided (12 Jan 2020) Stemming the flood of illegal external examiners (9 Feb 2020) What can scientists do in an emergency shutdown? (11 Mar 2020) Stepping back a level: Stress management for academics in the pandemic (2 May 2020) TEF in the time of pandemic (27 Jul 2020)

Celebrity scientists/quackery
Three ways to improve cognitive test scores without intervention (14 Aug 2010) What does it take to become a Fellow of the RSM? (24 Jul 2011) An open letter to Baroness Susan Greenfield (4 Aug 2011) Susan Greenfield and autistic spectrum disorder: was she misrepresented? (12 Aug 2011) How to become a celebrity scientific expert (12 Sep 2011) The kids are all right in daycare (14 Sep 2011)  The weird world of US ethics regulation (25 Nov 2011) Pioneering treatment or quackery? How to decide (4 Dec 2011) Psychoanalytic treatment for autism: Interviews with French analysts (23 Jan 2012) Neuroscientific interventions for dyslexia: red flags (24 Feb 2012) Why most scientists don't take Susan Greenfield seriously (26 Sept 2014) NeuroPointDX's blood test for Autism Spectrum Disorder ( 12 Jan 2019)

Women
Academic mobbing in cyberspace (30 May 2010) What works for women: some useful links (12 Jan 2011) The burqua ban: what's a liberal response (21 Apr 2011) C'mon sisters! Speak out! (28 Mar 2012) Psychology: where are all the men? (5 Nov 2012) Should Rennard be reinstated? (1 June 2014) How the media spun the Tim Hunt story (24 Jun 2015)

Politics and Religion
Lies, damned lies and spin (15 Oct 2011) A letter to Nick Clegg from an ex liberal democrat (11 Mar 2012) BBC's 'extensive coverage' of the NHS bill (9 Apr 2012) Schoolgirls' health put at risk by Catholic view on vaccination (30 Jun 2012) A letter to Boris Johnson (30 Nov 2013) How the government spins a crisis (floods) (1 Jan 2014) The alt-right guide to fielding conference questions (18 Feb 2017) We know what's best for you: politicians vs. experts (17 Feb 2017) Barely a good word for Donald Trump in Houses of Parliament (23 Feb 2017) Do you really want another referendum? Be careful what you wish for (12 Jan 2018) My response to the EPA's 'Strengthening Transparency in Regulatory Science' (9 May 2018) What is driving Theresa May? ( 27 Mar 2019) A day out at 10 Downing St (10 Aug 2019) Voting in the EU referendum: Ignorance, deceit and folly ( 8 Sep 2019) Harry Potter and the Beast of Brexit (20 Oct 2019) Attempting to communicate with the BBC (8 May 2020) Boris bingo: strategies for (not) answering questions (29 May 2020)

Humour and miscellaneous
Orwellian prize for scientific misrepresentation (1 Jun 2010) An exciting day in the life of a scientist (24 Jun 2010) Science journal editors: a taxonomy (28 Sep 2010) Parasites, pangolins and peer review (26 Nov 2010) A day working from home (23 Dec 2010) The one hour lecture (11 Mar 2011) The expansion of research regulators (20 Mar 2011) Scientific communication: the Comment option (25 May 2011) How to survive in psychological research (13 Jul 2011) Your Twitter Profile: The Importance of Not Being Earnest (19 Nov 2011) 2011 Orwellian Prize for Journalistic Misrepresentation (29 Jan 2012) The ultimate email auto-response (12 Apr 2012) Well, this should be easy…. (21 May 2012) The bewildering bathroom challenge (19 Jul 2012) Are Starbucks hiding their profits on the planet Vulcan? (15 Nov 2012) Forget the Tower of Hanoi (11 Apr 2013) How do you communicate with a communications company? (30 Mar 2014) Noah: A film review from 32,000 ft (28 July 2014) The rationalist spa (11 Sep 2015) Talking about tax: weasel words (19 Apr 2016) Controversial statues: remove or revise? (22 Dec 2016) My most popular posts of 2016 (2 Jan 2017) The alt-right guide to fielding conference questions (18 Feb 2017) An index of neighbourhood advantage from English postcode data (15 Sep 2018) Working memories: A brief review of Alan Baddeley's memoir (13 Oct 2018)