Tuesday, 18 February 2020

Why eugenics is wrong

I really didn't think I would need to write a blogpost on this topic, but it seems eugenics is having a resurgence in the UK, so here we go.

The idea of eugenics is deceptively simple. Given that some traits are heritable, we should identify those with beneficial heritable traits and encourage them to have more children, while discouraging (or even preventing) those with less desirable traits from breeding. This way we will improve the human race.

Those promoting eugenics have decided that high intelligence is a desirable trait, and indeed it does correlate with things like good educational outcomes, earnings and health. It is also heritable. So wouldn't it be great to improve the human race by genetic selection for intelligence?

Much of the debate on this question has focussed on whether it could be done, rather than whether it should be. Many who would blench at enforced sterilisation have warmed to the suggestion that Polygenic Risk Scores can be used to predict educational attainment, and so could be used for embryo selection. However, those promoting this idea have exaggerated the predictive power of polygenic scores (see this preprint by Tim Morris for a recent analysis, and this review of Kevin Mitchell's book Innate for other links). But let us suppose for a moment that in the future we could predict an individual's intelligence from a genetic score: would it be acceptable then to use this information to improve the human race?

The flaw in the argument is exposed when you consider the people who are making it. Typically, they are people who do well on intelligence tests. In effect, they are saying "The world needs more people like us". Looking at those advocating eugenic policies, many of us would beg to differ.

The bad state of the world we live in is not caused by unintelligent people. It is caused by intelligent people who have used their abilities to amass disproportionate wealth, manipulate others or elbow them out of the way. Eugenicists should be especially aware that their advantages are due to luck rather than merit, yet they behave as if they deserve them, and fiercely protect them from "people not like us".

If we really wanted to use our knowledge of genetics to make the world a better place, we would select for the traits of kindness and tolerance. Rather than sterilising the unintelligent, we would minimise breeding by those who are characterised by greed and a sense of superiority over other human beings. But there's the catch: it's only those who think they're superior to others who actually want to implement eugenic policies.

Sunday, 9 February 2020

Stemming the flood of illegal external examiners

On 21st January, the Times Higher Education published a short piece about Professor Eric Barendt, an academic lawyer at UCL, who had been told that he had to submit his passport to another University in order to be acceptable as an external examiner. He thought this was preposterous, and declined to do so. The reaction on Twitter indicated that passport checks were now widespread in British universities, and many academics were unhappy about it.

My sympathies are with Prof Barendt, and I've decided that I too will not agree to be an external examiner if I am required to provide my passport to prove I am eligible. In fact, a few days after this story broke, I was invited to be an external examiner, and agreed only on condition that I did not have to provide my passport. Alas, it looks like this means I won't be examining the thesis.

This may look like petulance: refusal to comply with what is not a burdensome requirement creates difficulties for a blameless candidate and their supervisor. So let me explain why I think it is important.

External examining is a highly skilled, high-stakes, onerous task for which one is paid not much more than the minimum wage. The going rate varies from institution to institution, but in my recent experience you may get around £180 to £240. You have to read and evaluate a thesis that represents 3 years' worth of work (around 40,000-50,000 words in my discipline), visit the candidate's home institution to conduct an oral examination that lasts around 2-3 hours, ensure that any corrections are done to your satisfaction, and write a report with recommendations. Nobody does this for the money. Rather, like so much in academia, the whole system survives by a quid pro quo: you know that when your own students need examining, you'll want to find external examiners for them. With a strong student, examining can have its own intrinsic rewards, but it can also be highly stressful if there are problems with the thesis. So overall, all of the academics involved in this process know that the external examiner is doing a favour for another institution by agreeing to take on this extra job.

When I first did examining, many years ago, arrangements for selecting examiners were pretty informal. Times change, and everything has got more official and bureaucratic. Many institutions now require external examiners to provide proof of their competence to do the job (a CV and/or list of previous candidates examined), and some have guidelines to avoid too much chumminess between supervisor and external examiner (no co-authorships, for example). I can see that these requirements have a point in preserving the integrity of the examination system.

But the passport check is really the last straw. It's senseless on two counts. First, it implies that academic institutions classify external examiners as employees, even though they are doing a one-off task for which the pay is trivial. Second, as Prof Barendt noted, it means that they don't trust other academic institutions to do proper checks of right to work. Now, it may be that there are some dodgy places where this is the case, but it seems reasonable to assume that Higher Education Institutions recognised by the Office for Students will be compliant with the law on this point. What is weird is that when I protest about the passport check for external examiners, some colleagues say, "But if the institution didn't do these checks, they'd be liable for enormous fines". Well, if that is the case, then surely it's safe to assume that the institution that actually employs the external examiner will have done the checks. I can understand that institutions might want the option of conducting checks in rare cases where there was reason to doubt this was true. What is so exasperating is the mandatory nature of checks that are otiose in 99% of cases.

Some years ago, in a different context, I wrote a piece about expansion of research regulation in academic life. Many of the points I made there apply to this situation. Bureaucracy creeps up on us by a series of stealthy small steps, until we suddenly find ourselves engulfed by it. Yes, showing a passport is a trivial matter, but I think that if we don't resist this kind of thing, it will only get worse.

P.S. Eric Barendt has pointed me to a piece he wrote on this topic for the Oxford Magazine (2020, No. 416, pp 8-10). I don't think this publication is available online, so here is just a short quote from it concerning the legal aspects of passport checks - something that has been discussed on Twitter in response to this blogpost.
An employer breaks the law only if it employs an illegal immigrant, not because it fails to conduct passport checks. If it is confident it is employing a UK national (or other person with a right to work in the UK such as an EEA or Swiss national), then it has nothing to worry about. So an automatic request is unnecessary. It reveals what may be termed a culture of ‘over-compliance’ with government policy. Of course, it is a sensible, indeed a vital, step to take, if a university, or indeed any employer, has doubts about the immigration status of anyone it is contemplating employing, but common sense surely suggests it is quite unwarranted when it engages someone whom it ought to trust.
Another issue tackled by Eric's piece is whether it is reasonable for Universities to treat external examiners as employees:
... it is hard to see why an external examiner, particularly of a doctoral thesis, should be treated as an employee of the host university, when an academic reviewer of a book proposal is not regarded as an employee of the publisher which engaged him (or her) to review it.
It has been suggested on Twitter that if we are to be regarded as employees, we should be paid an appropriate wage, and the post should be advertised!  

Sunday, 12 January 2020

Should I stay or should I go? When debate with opponents should be avoided

Suppose you are invited to speak at a conference where some of the other speakers have views very different from yours. What do you do? My guess is that most academics would say you should accept. After all, we progress by evaluating claims and counterclaims, and robust debate is the lifeblood of scientific research. I'm going to argue here that there are exceptions and explain why I think responsible scientists should avoid a meeting called "Fixing Science: Practical Solutions for the Irreproducibility Crisis".

To understand this reaction, it helps to have read Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming by Erik Conway and Naomi Oreskes (reviewed here). The synopsis from the book blurb is as follows:
The U.S. scientific community has long led the world in research on such areas as public health, environmental science, and issues affecting quality of life. Our scientists have produced landmark studies on the dangers of DDT, tobacco smoke, acid rain, and global warming. But at the same time, a small yet potent subset of this community leads the world in vehement denial of these dangers.

Merchants of Doubt tells the story of how a loose-knit group of high-level scientists and scientific advisers, with deep connections in politics and industry, ran effective campaigns to mislead the public and deny well-established scientific knowledge over four decades. Remarkably, the same individuals surface repeatedly - some of the same figures who have claimed that the science of global warming is "not settled" denied the truth of studies linking smoking to lung cancer, coal smoke to acid rain, and CFCs to the ozone hole. "Doubt is our product," wrote one tobacco executive. These 'experts' supplied it.
Uncertainty about science that threatens big businesses has been promoted by think tanks such as the Heartland Institute and Cato Institute, which receive substantial funding from those vested interests. The Fixing Science meeting has a clear overlap with those players.

The meeting first came to my attention when a mini Twitterstorm erupted after James Heathers tweeted:
Everyone's familiar with the manel, right? The all-male panel? Well, here's a whole new level for you.

I found myself wondering whether the lack of women was deliberate – maybe the organisers think you need a Y chromosome to do science – or whether it just showed they were tone-deaf to current social norms.

Once I twigged that this was organised by the National Association of Scholars, then everything fell into place. You can find the background to this organisation in their annual report here.

As several commentators have pointed out, they use the acronym NAS, which just happens to be the same as the highly respectable National Academy of Sciences: to avoid confusion here I will refer to them as NatAsSchols. The impression from their website and publications is that they are aligned with a neoliberal viewpoint and are opposed to attempts to increase diversity of race or gender in Universities.

So why is this organisation, whose mission is focused on issues such as preserving free speech and counteracting left-wing bias in Universities, running a meeting on fixing problems in science?

The NatAsSchols explains their interest in this topic as follows:
In April we published The Irreproducibility Crisis, a report on the modern scientific crisis of reproducibility—the failure of a shocking amount of scientific research to discover true results because of slipshod use of statistics, groupthink, and flawed research techniques. We launched the report at the Rayburn House Office Building in Washington, DC; it was introduced by Representative Lamar Smith, the Chairman of the House Committee on Science, Space, and Technology. This project signals our increasing commitment to address the academy’s flawed science as well as its abandonment of Western civilization and the liberal arts. We are following up The Irreproducibility Crisis with the investigation of four government agencies, including the Environmental Protection Agency. We are determined to find out just how badly irreproducible science has distorted government policy.
This makes it clear that the agenda is fundamentally a political one, designed to support the Trump administration's dismantling of environmental protections.

In 2018, Naomi Oreskes, author of Merchants of Doubt, wrote in Nature about a new 'Transparency Rule' proposed by the Environmental Protection Agency:
There is a crisis in US science, but it is not the one claimed by advocates for the rule. The crisis is the attempt to discredit scientific findings that threaten powerful corporate interests. The EPA is following a pattern that I and others have documented in regard to tobacco smoke, pollution, climate, and more. One tactic exploits the idea of scientific uncertainty to imply there is no scientific consensus. Another, seen in the latest efforts, insinuates that relevant research might be flawed. To add insult to injury, those using these tactics claim to be defending science.
February's meeting is in the same mould. The format of the meeting is cleverly constructed. The conference will be introduced and summed up by David J. Theroux (Founder, President and Chief Executive Officer of the Independent Institute and Publisher of The Independent Review) and Peter Wood (President, NatAsSchols). Neither man has any scientific background. Theroux delighted the Heartland Institute last summer when he promoted the idea, recently publicised by Donald Trump, that wind turbines are responsible for killing numerous birds (to see this lampooned, click here).

Wood is an anthropologist who was Provost at a small religious school, The King’s College in New York City (2005-2007), before moving to NatAsSchols. He has, as far as I can tell, no peer-reviewed publications, but he has written pieces deriding climate concerns, e.g. "the fantasies of global warming catastrophe are a kind of substitute religion, replete with a salvation doctrine, rituals of expiation, and a collection of demons to be cast out."

Another presenter is David Randall, who is Director of Research at NatAsSchols, policy advisor to the Heartland Institute and first author of the report on "The Irreproducibility Crisis of Modern Science". He is an unusual person to be authoring an authoritative report on the state of science. Web of Science turned up seven publications by him, all in politics journals, and none with any citations. His background is in history, library studies and fiction writing.

A rather puzzling choice of speaker is Richard K. Vedder, Distinguished Emeritus Professor of Economics, Ohio University and senior fellow at The Independent Institute, a think-tank founded by David Theroux. I could not find much evidence that he has shown any prior interest in science. He is Founding Director of the Center for College Affordability and Productivity in Washington, D.C and policy advisor to the Heartland Institute.

But there are also some accredited scientists on the programme, who can be divided into two camps. First, we have a set of five speakers who are aligned with NatAsSchols and/or the Heartland Institute and who have unconventional views on subjects such as climate change, pollution and gay relationships:

Elliott D. Bloom is Professor Emeritus at the Kavli Institute for Particle Astrophysics and Cosmology in the Stanford Linear Accelerator Laboratory (SLAC) and a Fellow of the American Physical Society. He has an entry on the Independent Institute website which states: "He was a member of the SLAC team with Jerome I. Friedman, Henry W. Kendall and Richard E. Taylor who received the 1990 Nobel Prize in Physics." I thought this meant he was a Nobel Laureate, but he's not listed as one. Nevertheless, he has a strong publication record in Physics. He has co-authored a presentation on "Global Warming: Fact or Fiction?", which concludes that the sun, rather than CO2, is the principal driver of climate change.

Anastasios Tsonis is Emeritus Distinguished Professor, Department of Mathematical Sciences, Atmospheric Sciences Group, University of Wisconsin Milwaukee; and Adjunct Research Scientist, Hydrologic Research Center, San Diego, California. He has worked on mathematical models of atmospheric processes and has a strong set of publications. He is a member of the academic advisory council of the Global Warming Policy Forum, a think tank founded by Nigel Lawson to combat policies designed to mitigate climate change.

Patrick J. Michaels, Senior Fellow, Competitive Enterprise Institute, has a Wikipedia entry that states that "he was a senior fellow in environmental studies at the Cato Institute until Spring 2019. Until 2007 he was research professor of environmental sciences at the University of Virginia, where he had worked from 1980." Michaels also has an entry on the website of the Heartland Institute.

Louis Anthony Cox is Professor, Department of Biostatistics and Informatics, University of Colorado and President of Cox Associates, a Denver-based applied research company specializing in quantitative risk analysis, causal modeling, advanced analytics, and operations research. He has a long list of publications. A Google search turns up an article in the Los Angeles Times which states:
The Trump administration’s reliance on industry-funded environmental specialists is again coming under fire, this time by researchers who say that Louis Anthony 'Tony' Cox Jr., who leads a key Environmental Protection Agency advisory board on air pollution, is a 'fringe' scientist and ideologue pushing policies detrimental to public health.
They refer to this paper in Science, which stated that Cox ignored consensus viewpoints on the effects of smog and particulate pollution. His work has also been criticised for its conflict with corporate interests.

Mark Regnerus, Professor, Sociology Department, University of Texas at Austin has a Wikipedia page which notes the controversy around his research on the adverse impact of a child having a parent who has been involved in a same-sex relationship. The research is funded by the Witherspoon Institute, a conservative think tank. Regnerus also contributed to an amicus brief in opposition to same-sex marriage. A sympathetic account of the controversy was published by the NatAsSchols.

Of the remaining 11 speakers, as far as I can see only one, Barry Smith (University at Buffalo), has any formal links with NatAsSchols. With a few exceptions they are psychologists/philosophers/statisticians with a specific interest in scientific reproducibility. Interest in this topic has been growing exponentially over the past 10 years, and, in general, those engaged in research in this area do so with the aim of improving scientific transparency and practice. However, they run the risk that their agenda can be weaponised to cast doubt on any particular part of scientific research that is politically or commercially inconvenient.

They will serve perfectly as foils to the five speakers whose minority views on climate/pollution/sexuality will not have to face questioning by anyone with deep expertise in those areas. I have no doubt that the reproducibility experts will have lively debate among themselves as to how the irreproducibility crisis should be fixed, in the process achieving the useful (to the organisers) goal of emphasising just what an unreliable and uncertain business science is.

Should they agree to speak at this meeting? As @briandavidearp remarked on Twitter: "I'm wary of deliberately failing to engage/interact w/ people or organizations on the basis that they have diff moral or political commitments than me. That way balkanization and polarization lies."

That's an answer I would have agreed with a few years ago – after all, isn't that what academic life is all about? We should not just sit in our own bubble; rather we should engage respectfully with those who have different views. But this is really not about regular scientific debate. It's about weaponising the reproducibility debate to bolster the message that everything in science is uncertain – which is very convenient for those who wish to promote fringe ideas.

My view is that many of the speakers at this meeting are being played. On the one hand, their presence on the programme may encourage others to agree to participate, and give false reassurance to attendees that this is a regular conference. On the other, they will find that their arguments are scooped up by the Merchants of Doubt and used to argue that science is so uncertain that we should not accept the consensus view. We cannot be sure whether anthropogenic climate change is exaggerated, whether pollution is not really harmful, and whether gay relationships are damaging. Those who are concerned at seeing such ideas promoted without any debate between experts in those areas may wish to reconsider whether this meeting is really about 'Fixing Science', or whether it is rather about 'Fitting up Scientists'.

15th January 2020
I thank Lee Jussim for engaging in the comments below. I can see that, given his understanding of the situation, it would make sense to take part in the meeting. But his understanding is different from mine, and I want to add this PS to clarify what I'm saying.

It was perhaps a mistake for me to note the neoliberal affiliations of NatAsSchols, as this appears to have given Lee the impression that my objections to the Fixing Science meeting are based on disapproval of talking to those with right-wing beliefs. I am myself on the left politically, but I agree with Brian Earp that, insofar as it is possible to do so in good faith, we should engage with those with differing views. If the meeting consisted solely of experts in philosophy of sciences/methods/metascience, with different political persuasions, I would not be warning people off – quite the contrary.

Indeed, the people I identified as belonging to the second group of speakers would seem to be exactly such a group. As I noted, I doubt that they will come to a consensus about how to 'Fix Science', but a good mix of views and perspectives is represented. No doubt Lee and others will discuss an issue that is of particular interest to me, which is how our social and cognitive biases affect how we evaluate evidence (see Bishop, 2020). Such biases are not specifically associated with left- or right-leaning politics – they affect all of us.

The problem I have with the meeting is not that the organisers are right-wing, but rather that their organisation's goals are linked to issues around higher education, and they have no credentials in science, yet they fervently advocate minority views about such topics as climate change. Consider how bizarre it would be if, for instance, the Psychonomics Society declared that it planned to hold a meeting on 'Fixing Politics'. NatAsSchols just doesn't have credibility in the area of scientific practices. Alas, what they do have instead are links with funders whose vast wealth is used to attack science that threatens their vested interests. In this respect, I think the argument that 'the left-wingers are just as bad' breaks down.

But, I reiterate, the main point is not whether NatAsSchols is left- or right-wing. It's the weird structuring of the meeting, which juxtaposes a set of experts in the 'reproducibility crisis' with a set of individuals who promote scientific views that are far from mainstream. The fact that the topics are ones that are supported by the Heartland Institute is telling, but the same strategy could in principle be used with any fringe view. Suppose you were a sceptic about evolution or vaccination, or a believer in pre-cognition. You know your arguments would not survive scrutiny by experts familiar with evidence in the area, so you don't invite those (and to be fair, it's unlikely that they'd come anyway, as there are diminishing returns in engaging with those whose minds are fixed). But what you can do is to cast doubt on all scientific evidence by inviting along those who are questioning the solidity and credibility of current scientific practices. That's what is happening here.

The general strategy has been in use for years, as documented by Conway and Oreskes, and applied to diverse topics such as tobacco dangers and acid rain, as well as climate change. The Merchants of Doubt love it when scientists themselves disagree about the nature of evidence, because it gives them a get-out-of-jail-free card.

I'm firmly of the belief we should not shove problems with science under the carpet: we need to understand the nature and extent of such problems in order to fix them. But it is a mistake to engage with those who want to exploit the presence of uncertainty to give credibility to their fringe views.

Bishop, D. V. M. (2020). The psychology of experimental psychologists: Overcoming cognitive constraints to improve research. The 45th Sir Frederic Bartlett Lecture. Quarterly Journal of Experimental Psychology, 73(1), 1-19. doi:10.1177/1747021819886519

Wednesday, 1 January 2020

Research funders need to embrace slow science

Uta Frith courted controversy earlier this year when she published an opinion piece in which she advocated for Slow Science, including the radical suggestion that researchers should be limited in the number of papers they publish each year. This idea has been mooted before, but has never taken root: the famous Chaos in the Brickyard paper by Bernard Forscher dates back to 1963, and David Colquhoun has suggested restricting the number of publications by scientists as a solution more than once on his blog (here and here).

Over the past couple of weeks I've been thinking further about this, because I've been doing some bibliometric searches. This was in part prompted by the need to correct and clarify an analysis I had written up in 2010, about the amount of research on different neurodevelopmental disorders. I had previously noted the remarkable amount of research on autism and ADHD compared to other behaviourally-defined conditions. A check of recent databases shows no slowing in the rate of research. A search for publications with autism or autistic in the title yielded 2251 papers published in 2010; by 2019, this had risen to 6562. We can roughly halve this number if we restrict attention to the Web of Science Core database and search only for articles (not reviews, editorials, conference proceedings etc). This gives 1135 articles published in 2010 and 3075 published in 2019. That's around 8 papers every day for 2019.

We're spending huge amounts to generate these outputs. I looked at NIH Reporter, which provides a simple interface where you can enter search terms to identify grants funded by the National Institutes of Health. For the fiscal year 2018-2019 there were 1709 projects with the keyword 'autism or autistic', with a total spend of $890 million. And of course, NIH is not the only source of research funding.

Within the field of developmental neuropsychology, autism is the most extreme example of research expansion, but if we look at adult disorders, this level of research activity is by no means unique. My searches found that this year there were 6 papers published every day on schizophrenia, 15 per day on depression, and 11 per day on Alzheimer's disease.
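For anyone who wants to check my arithmetic, the per-day rates follow directly from the annual counts. Here is a quick illustrative calculation (the counts come from the searches described above; the 365-day divisor and the variable names are my own):

```python
# Rough conversion of annual publication counts (from the Web of Science
# searches described in the text) into papers per day.
annual_counts = {
    "autism articles, 2010": 1135,
    "autism articles, 2019": 3075,
}

for topic, n in annual_counts.items():
    print(f"{topic}: {n / 365:.1f} papers per day")
# 3075 / 365 ≈ 8.4, i.e. "around 8 papers every day" for 2019

# Growth in autism/autistic titles across all document types:
growth = 6562 / 2251  # 2019 count relative to 2010 count
print(f"Roughly {growth:.1f}-fold increase over the decade")
# ≈ 2.9-fold
```

The same division gives the other per-day figures quoted above (e.g. a topic with around 2,200 articles a year works out at about 6 papers per day).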

These are serious and common conditions and it is right that we fund research into them – if we could improve understanding and reduce their negative impacts, it would make a dramatic difference to many lives. The problem is information overload. Nobody, however nerdy and committed, could possibly keep abreast of the literature. And we're not just getting more information, the information is also more complex. I reckon I could probably understand the majority of papers on autism that were published when I started out in research years ago. That proportion has gone down and down with time, as methods get ever more complex. So we're spending increasing amounts of money to produce more and more research that is less and less comprehensible. Something has to give, and I like the proposal that we should all slow down.

But is it possible? If you want to get your hands on research funding, you need to demonstrate that you're likely to make good use of it. Publication track record provides objective evidence that you can do credible research, so researchers are focused on publishing papers. And they typically have a short time-frame in which to demonstrate productivity.

A key message from Uta's piece is that we need to stop confusing quantity with quality. When this topic has been discussed on social media, I've noted that many ECRs take the view that when you come to apply for grants or jobs, a large number of publications is seen as a good thing, and therefore Slow Science would damage the prospects of ECRs. That is not my experience. It's possible that there are differences in practice between different countries and subject areas, but in the UK the emphasis is much more on quality than quantity of publications, so a strategy of focusing on quality rather than quantity would be advantageous. Indeed, most of our major funders use proposal forms that ask applicants to list their N top publications, rather than a complete CV. This will disadvantage anyone who has sliced their oeuvre into lots of little papers, rather than writing a few substantial pieces. Similarly, in the UK Research Excellence Framework, researchers from an institution are required to submit their outputs, but there is a limited number that can be submitted – a restriction that was introduced many years ago to incentivise a focus on quality rather than quantity.

The same people who are outraged at reducing the number of publications often rail against the stress of working in the current system – and rightly so. After all, at some point in the research cycle, at least one person has to devote serious thought to the design, analysis and write-up. Each of these stages inevitably takes far longer than we anticipate – and then there is time needed to respond to reviewers. The quality and impact of research can be enhanced by pre-registration and making scripts and data open, but extra time needs to be budgeted for this. Indeed, lack of time is a common reason cited for not doing open science. Researchers who feel that to succeed they have to write numerous papers every year are bound to cut corners, and then burn out from stress. It makes far more sense to work slowly and carefully to produce a realistic number of strong papers that have been carefully planned, implemented and written up.

It's clear that science and scientists would benefit if we took things more slowly, but the major barrier is a lack of researcher confidence. Those who allocate funds to research have a vested interest in ensuring they get the best return from their investment – not a tsunami of papers that overwhelm us, but a smaller number of reports of high-quality, replicable and credible findings. If things are to change we need funders to do more to communicate to researchers that they will be evaluated on quality rather than quantity of outputs.

Sunday, 20 October 2019

Harry Potter and the Beast of Brexit

The People's Vote march yesterday was a good opportunity to catch up on family news with my brother, Owen, as we shuffled along from Hyde Park to Parliament Square. We were joined by my colleague Amy Orben, appropriately kitted out in blue and yellow.

Amy and Owen, People's March, Oct 19 2019
As with previous anti-Brexit marches, the atmosphere was friendly, many of the placards were jokey, and the event was enlivened by the presence of unexpected groups who reinforced our view that the UK is a friendly, inclusive, if somewhat eccentric place. I saw no signs of aggression, except right at the end, when a group of thuggish young men stood outside a pub by Trafalgar Square, shouting insults at the marchers.
Morris not Boris dancers
But the underlying mood was sombre. There was a sense of inevitability that Brexit was going to be pushed through, regardless of the damage done to individuals and to the country. Propaganda has won. The 'Will of the People' is used to justify us accepting a bad deal. The phrase is seldom challenged by journalists, who allow interviewees to trot it out, alongside the mantra, 'Respect the Referendum'.

But it's nonsense, of course. The deal that Johnson has achieved respects nothing and nobody. It bears no relation to what the 52% voted for. Many people now realise they were conned by the pre-referendum propaganda, which promised a Brexit that would fix all kinds of problems – underfinancing of the NHS, immigration, housing, jobs, even climate change. As Sadiq Khan memorably said, nobody voted to be poorer. And few people would think the break-up of the United Kingdom is a reasonable price to pay for Brexit. It would take just 5% of Leavers switching their vote to flip the outcome in favour of Remain.

Even so, I'm not confident that another Referendum would lead to success for Remain. The problem is that Johnson and his cronies use dishonesty as a weapon. I feel like a character in a Harry Potter novel, where the good people are put at a disadvantage because they take ethical issues seriously. That's why it's so important to hold our politicians to high standards: we have the Nolan principles of public life, but they lack teeth because they are simply ignored. Meanwhile, those who want to preserve the country that we were proud to be part of – the one that came together so magnificently for the London Olympics in 2012 – aren't good at propaganda. Imagine if we had someone with the talent of Dominic Cummings fighting on our side: a propagandist who, instead of promoting fear and hatred, could manipulate people's opinions to make them feel pride and pleasure in being part of an inclusive, intelligent, peace-loving nation. Instead, those opposed to Brexit are divided, and show no signs of understanding how to campaign effectively, always finding themselves on the back foot. When we discuss the contents of Operation Yellowhammer, we are told this is Project Fear: an official government report is dismissed as Remain propaganda. Rather than making a pro-active case for remaining in the EU, we are manipulated into defending ourselves against preposterous accusations.

Despite the jokes and banter, the people marching yesterday were angry. We are angry to see our country wrecked for no good reason. I could put up with taking a personal hit to my standard of living if I could see that it benefited others – indeed I regularly vote for parties that propose higher taxation for people like me. The thing that is hard to stomach is the absence of coherent answers when you ask a Leaver about the benefits that will ensue after Brexit. I'm a rational person, and Brexit seems totally irrational – harming so many sectors of society while benefitting only the vulture capitalists. Meanwhile, on the international stage, our competitors and enemies must be enjoying the spectacle of seeing the EU being weakened, as we engage in this act of self-harm.

In the right-hand column below are potential benefits of Brexit that have been put forward by the few people who actually engage when asked why they want to leave. In the left-hand column, I list risks of Brexit that are, as far as I am aware, adequately documented by people with expertise in these areas. Some of these, such as supply problems, are more relevant to a no-deal Brexit; others apply more broadly. There are dependencies between some of these: damage to farming, social care, the NHS, science and Higher Education is a consequence of the loss of EU workers, both from reluctance to live in a xenophobic country and from legal restrictions on their employment here. Disclaimer: I'm not an expert in politics and economics and I'd be glad to modify or add to the table if people can come up with well-evidenced arguments for doing so*.
My analysis of risks and benefits of Brexit
*Owen has commented on this (see below)
J.K. Rowling was prescient in her novels, which vividly described the triumph of propaganda over reason, of violence over peace, of the bully over the meek. With the Beast of Brexit, exemplified by Boris Johnson and his cronies, we see all these themes being played out in real life.

It is particularly galling when politicians argue that we have to have Brexit because otherwise there will be riots. In effect, this is saying that those who marched yesterday are to be ignored because they aren't violent. Of course, there are exceptions: I gather that it was not only Remain politicians who had to run the gauntlet of an angry crowd yesterday. Jacob Rees-Mogg was also verbally abused by a group of Remainers. I'm glad to say I have seen nobody defending such behaviour by either side. But politicians should not underestimate the genuine anger that is felt by Remainers, when people like Rees-Mogg claim in the Spectator that 'Everyone is saying “Just get on with it.” Moderate Remainers and Leavers alike are saying: “For goodness sake, please just finish it off.”’ One would hope that the thousands of moderate, peaceful marchers yesterday might disabuse him of that idea, yet I'm sure he'll continue to make these specious claims. Meanwhile, we are excluded from 'the People', are told we are undemocratic because we want a vote, and that we'd only be taken seriously if we started rioting.

I was particularly depressed to hear that some politicians had said they would support Boris Johnson's deal because they had received death threats from constituents. Have we really come to this? Are politicians saying to the people who marched yesterday that we'll only be listened to if we threaten to kill our opponents? Once we get to that point, we have lost all that is great about Britain. It is feeling perilously close.

Tuesday, 15 October 2019

The sinister side of French psychoanalysis revealed

Peak pseudoprofound bullshit* from Jacques Lacan; a proof that Woman does not exist
Sophie Robert, who created controversy in 2011 with her film 'Le Mur', has now produced a sequel, 'Le Phallus et le Neant'**, which extends her case against the dominance of psychoanalysis in French culture. In brief, the film makes the following points:
  1. Psychoanalysts enjoy a celebrity status in France as public intellectuals. 
  2. Their views are based heavily on writings of Sigmund Freud, Jacques Lacan and Françoise Dolto, and are not intellectually or scientifically coherent. 
  3. They promote ideas that are misogynistic and homophobic, and view small children as having erotic interest in their parents. Some of their statements appear to justify paedophilia and incest. 
  4. They do not see their role as helping cure psychiatric disorders. 
  5. They have a financial interest in maintaining the status quo. 
  6. Some of them work with vulnerable populations of children in institutions, which is especially troubling given point 3.
Le Mur focused on psychoanalytic treatment for autism (transcript available here); the new film has some overlap but is more structured around developing points 1-6, and raises further questions about the popularity of psychoanalysis for treatment of adult mental health problems in France. Although Robert notes at the outset that there are good practitioners who can help those who consult them, the overall message is that there are many analysts who do active harm to their clients, while charging them large sums of money. There appears to be no regulatory oversight of their activities.

Le Phallus et le Neant is a 2 hour-long film, and I recommend watching it in full; I started by finding the analysts merely irritating and pretentious, but as the film developed, it became increasingly disturbing. The last quarter included interviews with women who had suffered sexual abuse as children, and who were told they should not see themselves as victims.

Here are just a few clips to illustrate the content of the interviews with analysts.

Much of the first part of the film focuses on the negative views of Woman proposed by Freud and Lacan. Penis envy is taken extremely seriously.
Relationships between parents and their children are seen as complicated and problematic:

A cheerful and positive attitude to sex seems unattainable:

Regarding homosexuality, the film notes the influence of the late Andre Green, who according to Wikipedia was 'one of the most important psychoanalytic thinkers of our times'. Green regarded homosexuality as a psychosis. Confronted with evidence of well-balanced and contented gay men, he claimed they were psychotics-in-denial, apparently healthy but likely to fall prey to insanity at any time. Sophie probed her interviewees about this, and they looked cagey, particularly when asked if there were any gay psychoanalysts. The idea of gay couples as parents has been highly contentious in France: if we believed the psychoanalysts, this would be a disaster. In fact, as shown by the work of Susan Golombok and colleagues, it isn't anything of the kind.

If you argue against the views of the analysts, by saying you never wanted a penis, you had a loving but unerotic relationship with your parents, and you find adult sex fun, then this is treated as evidence of the depth of your repression, rather than evidence for the invalidity of the theory.

The late Françoise Dolto had a major influence on psychoanalytic thought in France. Her claims  that children have desire towards adults, and trap them because of this, were reflected at several points in the interviews.
And given these provocative children, it seems that a father who commits incest with his child is really only doing what comes naturally:

A final point is the mismatch between the expectations of clients and what the psychoanalyst offers. One assumes that most people go into analysis with the expectation that it will help them: after all, they invest a great deal of time and money in the process. But that does not seem to be the view of the analysts. Their attitude seems to be that the human situation is pretty hopeless, because what people want (sex with a parent) is not possible, and the best they can do is to help us come to realise that:

* This term is taken from Pennycook, G., Cheyne, J. A., Barr, N., Koehler, D. J., & Fugelsang, J. A. (2015). On the reception and detection of pseudo-profound bullshit. Judgment and Decision Making, 10, 549-563.

**A version of the film with English subtitles is publicly available on Vimeo at a cost of €4. Conflict of interest statement: I contributed to the funding of the film, but I will donate any royalties to charity.

Wednesday, 9 October 2019

Attempting to communicate with the BBC: it's not just politicians who refuse to answer questions

A couple of weeks ago there was an outburst of public indignation after it emerged that the BBC had censured their presenter Naga Munchetty. As reported by the Independent, in July BBC Breakfast reported on comments made by President Trump to four US congresswomen, none of whom was white, whom he told to "go back and help fix the totally broken and crime-infested places from which they came." Naga commented "Every time I've been told as a woman of colour to 'go home', to 'go back to where I've come from', that was embedded in racism."

Most of the commentary at the time focused on whether or not Naga had behaved unprofessionally in making the comment, or whether she was justified in describing Trump's comment as racist. The public outcry has been heard: the Director General of the BBC, Tony Hall, has since overturned the decision to censure her.

There is, however, another concern about the BBC's action, which is why they chose to act on this matter in the first place. All accounts of the story talk of 'a complaint'. The BBC complaints website explains that they can get as many as 200,000 complaints every year, which averages out at 547 a day. Now, I would have thought that they might have some guidelines in place about which complaints to act upon. In particular, they would be expected to take most seriously issues about which there were a large number of complaints. So it seems curious, to say the least, if they had decided to act on a single complaint, and I started wondering whether it had been made by someone with political clout.

The complaints website allows you to submit a complaint or to make a comment, but not to ask a question. Nevertheless, I submitted some questions through the complaints portal, and this morning I received a response, which I append in full below. Here are my questions and the answers:

Q1. Was there really just ONE complaint?
BBC: Ignored

Q2: If yes, how often does the BBC complaints department act on a SINGLE complaint?
BBC: Ignored

Q3: Who made the complaint?
BBC: We appreciate you would like specific information about the audience member who complained about Naga's comments but we can't disclose details of the complainant, but any viewer or listener can make a complaint and pursue it through the BBC's Complaints framework.

Q4: If you cannot disclose identity of the complainant, can you confirm whether it was anyone in public life?
BBC: Ignored

Q5: Can you reassure me that any action against Munchetty was not made because of any political pressure on the BBC?
BBC: Ignored

I guess the BBC are so used to politicians not answering questions that they feel it is acceptable behaviour. I don't, and I treat evasion as evidence of hiding something they don't want us to hear. I was interested to see that Ofcom is on the case, but they have been fobbed off just as I was. Let's keep digging. I smell a large and ugly rat.

Here is the full text of the response:

Dear Prof Bishop
Reference CAS-5652646-TNKKXL
Thank you for contacting the BBC.
I understand you have concerns about the BBC Complaints process specifically with regard to a complaint made regarding Breakfast presenter Naga Munchetty and comments about US President Trump. 
Naturally we regret when any member of our audience is unhappy with any aspect of what we do. We treat all complaints seriously, but what matters is whether the complaint is justified and the BBC acted wrongly. If so we apologise. If we don’t agree that our standards or public service obligations were breached, we try to explain why. We appreciate you would like specific information about the audience member who complained about Naga's comments but we can't disclose details of the complainant, but any viewer or listener can make a complaint and pursue it through the BBC's Complaints framework.
Nonetheless, I understand this is something you feel strongly about and I’ve included your points on our audience feedback report that is sent to senior management each morning and ensures that your complaint has been seen by the right people quickly. 
We appreciate you taking the time to register your views on this matter as it is greatly helpful in informing future decisions at the BBC.
Thanks again for getting in touch.
Kind regards
John Hamill
BBC Complaints Team

Tuesday, 10 September 2019

Responding to the replication crisis: reflections on Metascience2019

Talk by Yang Yang at Metascience 2019
I'm just back from MetaScience 2019. It was an immense privilege to be invited to speak at such a stimulating and timely meeting, and I would like to thank the Fetzer Franklin Fund, who not only generously funded the meeting, but also ensured the impressively smooth running of a packed schedule. The organisers - Brian Nosek, Jonathan Schooler, Jon Krosnick, Leif Nelson and Jan Walleczek - did a great job in bringing together speakers on a range of topics, and the quality of talks was outstanding. For me, highlights were hearing great presentations from people well outside my usual orbit, such as Melissa Schilling on 'Where do breakthrough ideas come from?', Carl Bergstrom on modelling grant funding systems, and Cailin O'Connor on scientific polarisation.

The talks were recorded, but I gather it may be some months before the film is available. Meanwhile, slides of many of the presentations are available here, and there is a copious Twitter stream on the hashtag #metascience2019. Special thanks are due to Joseph Fridman (@joseph_fridman): if you look at his timeline, you can pretty well reconstruct the entire meeting from live tweets. Noah Haber (@NoahHaber) also deserves special mention for extensive commentary, including a post-conference reflection starting here.  It is a sign of a successful meeting, I think, if it gets people, like Noah, raising more general questions about the direction the field is going in, and it is in that spirit I would like to share some of my own thoughts.

In the past 15 years or so, we have made enormous progress in documenting problems with credibility of research findings, not just in psychology, but in many areas of science. Metascience studies have helped us quantify the extent of the problem and begun to shed light on the underlying causes. We now have to confront the question of what we do next. That would seem to be a no-brainer: we need to concentrate on fixing the problem. But there is a real danger of rushing in with well-intentioned solutions that may be ineffective at best or have unintended consequences at worst.

One question is whether we should be continuing with a focus on replication studies. Noah Haber was critical of the number of talks that focused on replication, but I had a rather different take on this: it depends on what the purpose of a replication study is. I think further replication initiatives, in the style of the original Reproducibility Project, can be invaluable in highlighting problems (or not) in a field. Tim Errington's talk about the Cancer Biology Reproducibility Project demonstrated beautifully how a systematic attempt to replicate findings can reveal major problems in a field. Studies in this area are often dependent on specialised procedures and materials, which are either poorly described or unavailable. In such circumstances it becomes impossible for other labs to reproduce the methods, let alone replicate the results. The mindset of many researchers in this area is also unhelpful – the sense is that competition dominates, and open science ideals are not part of the training of scientists. But these are problems that can be fixed.

As was evident from my questions after the talk, I was less enthused by the idea of doing a large replication of Daryl Bem's studies on extra-sensory perception. Zoltán Kekecs and his team have put in a huge amount of work to ensure that this study meets the highest standards of rigour, and it is a model of collaborative planning, ensuring input into the research questions and design from those with very different prior beliefs. I just wondered what the point was. If you want to put in all that time, money and effort, wouldn't it be better to investigate a hypothesis about something that doesn't contradict the laws of physics? There were two responses to this. Zoltán's view was that the study would tell us more than whether or not precognition exists: it would provide a model of methods that could be extended to other questions. That seems reasonable: some of the innovations, in terms of automated methods and collaborative working, could be applied in other contexts to ensure original research was done to the highest standards. Jonathan Schooler, on the other hand, felt it was unscientific of me to prejudge the question, given a large previous literature of positive findings on ESP, including a meta-analysis. Given that I come from a field where there are numerous phenomena that have been debunked after years of apparent positive evidence, I was not swayed by this argument. (See for instance this blogpost on 5-HTTLPR and depression). If the study by Kekecs et al sets such a high standard that the results will be treated as definitive, then I guess it might be worthwhile. But somehow I doubt that a null finding in this study will convince believers to abandon this line of work.

Another major concern I had was the widespread reliance on proxy indicators of research quality. One talk that exemplified this was Yang Yang's presentation on machine intelligence approaches to predicting replicability of studies. He started by noting that non-replicable results get cited just as much as replicable ones: a depressing finding indeed, and one that motivated the study he reported. His talk was clever at many levels. It was ingenious to use the existing results from the Reproducibility Project as a database that could be mined to identify characteristics of results that replicated. I'm not qualified to comment on the machine learning approach, which involved using ngrams extracted from texts to predict a binary category of replicable or not. But implicit in this study was the idea that the results from this exercise could be useful in future in helping us identify, just on the basis of textual analysis, which studies were likely to be replicable.

Now, this seems misguided on several levels. For a start, as we know from the field of medical screening, the usefulness of a screening test depends on the base rate of the condition you are screening for, the extent to which the sample you develop the test on is representative of the population, and the accuracy of prediction. I would be frankly amazed if the results of this exercise yielded a useful screener. But even if they did, then Goodhart's law would kick in: as soon as researchers became aware that there was a formula being used to predict how replicable their research was, they'd write their papers in a way that would maximise their score. One can even imagine whole new companies springing up who would take your low-scoring research paper and, for a price, revise it to get a better score. I somehow don't think this would benefit science. In defence of this approach, it was argued that it would allow us to identify characteristics of replicable work, and encourage people to emulate these. But this seems back-to-front logic. Why try to optimise an indirect, weak proxy for what makes good science (ngram characteristics of the write-up) rather than optimising, erm, good scientific practices? Recommended readings in this area include Philip Stark's short piece on Preproducibility, as well as Florian Markowetz's 'Five selfish reasons to work reproducibly'.
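The screening analogy can be made concrete with a toy Bayes' rule calculation. All the numbers below are hypothetical, chosen purely to illustrate how the base rate drives the result; they come from me, not from any study:

```python
# Toy illustration of why base rates limit the usefulness of a
# hypothetical "replicability screener". All figures are invented.

def positive_predictive_value(sensitivity, specificity, base_rate):
    """P(truly replicable | screener flags as replicable), via Bayes' rule."""
    true_pos = sensitivity * base_rate
    false_pos = (1 - specificity) * (1 - base_rate)
    return true_pos / (true_pos + false_pos)

# Suppose the screener catches 80% of replicable studies (sensitivity)
# and correctly rejects 80% of non-replicable ones (specificity).
for base_rate in (0.2, 0.4, 0.6):
    ppv = positive_predictive_value(0.8, 0.8, base_rate)
    print(f"base rate {base_rate:.0%}: PPV = {ppv:.2f}")
```

Even a screener that is right 80% of the time in both directions is no better than a coin flip when only a fifth of studies in a field actually replicate.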

My reservations here are an extension of broader concerns about reliance on text-mining in meta-science (see e.g. https://peerj.com/articles/1715/). We have this wonderful ability to pull in mountains of data from the online literature to see patterns that might be undetectable otherwise. But ultimately, the information that we extract cannot give more than a superficial sense of the content. It seems sometimes that we're moving to a situation where science will be done by bots, leaving the human brain out of the process altogether. This would, to my mind, be a mistake.

Sunday, 8 September 2019

Voting in the EU Referendum: Ignorance, deceit and folly

As a Remainer, I am baffled as to what Brexiteers want. If you ask them, as I sometimes do on Twitter, they mostly give you slogans such as "Taking Back Control". I'm more interested in specifics, i.e. what things do people think will be better for them if we leave. It is clear that things that matter to me – the economy, the health service, scientific research, my own freedom of movement in Europe - will be damaged by Brexit. I would put up with that if there was some compensating factor that would benefit other people, but I'm not convinced there is. In fact, all the indications are that people who voted to leave will suffer the negatives of Brexit just as much as those who voted to remain.

But are people who want to leave really so illogical? Brexit and its complexities is a long way from my expertise. I've tried to educate myself so that I can understand the debates about different options, but I'm aware that, despite being highly educated, I don't know much about the EU. I recently decided that, as someone who is interested in evidence, I should take a look at some of the surveys on this topic. We all suffer from confirmation bias, the tendency to seek out, process and remember just that information that agrees with our preconceptions, so I wanted to approach this as dispassionately as I could. The UK in a Changing Europe project is a useful starting place. They are funded by the Economic and Social Research Council, and appear to be even-handed. I have barely begun to scratch the surface of the content of their website, but I found their report Brexit and Public Opinion 2019 provides a useful and readable summary of recent academic research.

One paper summarised in the Brexit and Public Opinion 2019 report caught my attention in particular. Carl, Richards and Heath (2019) reported results from a survey of over 3000 people selected to be broadly representative of the British population, who were asked 15 questions about the EU. Overall, there was barely any difference between Leave and Remain voters in the accuracy of answers. The authors noted that the results counteracted a common belief, put forward by some prominent commentators – that Leave voters had, on average, a weaker understanding of what they voted for than Remain voters. Interestingly, Carl et al did confirm, as other surveys had done, that those voting Leave were less well-educated than Remain voters, and indeed, in their study, the Leave voters did less well on a test of probabilistic reasoning. But this was largely unrelated to their responses to the EU survey. The one factor that did differentiate Leave and Remain voters was how they responded to a subset of questions that were deemed 'ideologically convenient' for their position: I have replotted the data below*. As an aside, I'm not entirely convinced by the categorisation of certain items as ideologically convenient - shown in the figure with £ and € symbols - but that is a minor point.
Responses to survey items from Carl et al (2019) Table 1.  Items marked £ were regarded as ideologically convenient for Brexit voters; those marked € as convenient for Remain voters
I took a rather different message away from the survey, however. I have to start by saying that I was rather disappointed when I read the survey items, because they didn't focus on implications of EU membership for individuals. I would have liked to see items probing knowledge of how leaving the EU might affect trade, immigration and travel, and relations between England and the rest of the UK. The survey questions instead tested factual knowledge about the EU, which could be scored using a simple Yes/No response format. It would have been perhaps more relevant, when seeking evidence for the validity of the referendum, to assess how accurately people estimated the costs and benefits of EU membership.

With that caveat, the most striking thing to me was how poorly people did on the survey, regardless of whether they voted Leave or Remain. There were 15 two-choice questions. If people were just guessing at random, they would be expected to score on average 7.5, with 95% of people scoring between 4 and 11.  Carl et al plotted the distribution of scores (Figure 2) and noted that the average score was only 8.8, not much higher than what would be expected if people were just guessing. Only 11.2% of Leave voters and 13.1% of Remain voters scored 12 or more. However, the item-level responses indicate that people weren't just guessing, because there were systematic differences from item to item. On some items, people did better than chance. But, as Carl et al noted, there were four items where people performed below chance. Three of these items had been designated as "ideologically convenient" for the Remain position, and one as convenient for the Leave position.
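The guessing baseline is just the binomial distribution with 15 trials and success probability one half. A quick sketch (my own check, not code from the paper):

```python
from math import comb

# Score distribution if all 15 two-choice items were answered by coin-flip.
n, p = 15, 0.5
probs = [comb(n, k) * p**n for k in range(n + 1)]  # P(score = k)

mean = sum(k * pk for k, pk in enumerate(probs))
print(f"expected score: {mean}")  # 7.5

# Probability that a pure guesser scores between 4 and 11 inclusive
central = sum(probs[4:12])
print(f"P(4 <= score <= 11) = {central:.3f}")
```

The 4-to-11 range in fact covers about 96.5% of pure guessers, and it is the narrowest symmetric interval covering at least 95%, which matches the rounded figure quoted above.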

Figure 1 from Carl et al (2019). Distributions of observed scores and scores expected under guessing.

Carl et al cited a book by Jason Brennan, Against Democracy, which argues that "political decisions are presumed to be unjust if they are made incompetently, or in bad faith, or by a generally incompetent decision-making body". I haven't read the book yet, but that seems a reasonable point.

However, having introduced us to Brennan's argument, Carl et al explained: "Although our study did not seek to determine whether voters overall were sufficiently well informed to satisfy Brennan's (2016) ‘competence principle’, it did seek to determine whether there was a significant disparity in knowledge between Leave and Remain voters, something which––if present––could also be considered grounds for questioning the legitimacy of the referendum result."

My view is that, while Carl et al may not have set out to test the competence principle, their study nevertheless provided evidence highly relevant to the principle, evidence that challenges the validity of the referendum. If one accepts the EU questionnaire as an indicator of competence, then both Leave and Remain voters are severely lacking. Not only do they show a woeful ignorance of the EU, they also, in some respects, show evidence of systematic misunderstanding. 72% of Leave voters and 50% of Remain voters endorsed the statement that "More than ten per cent of British government spending goes to the EU." (Item M in Figure 1). According to the Europa.eu website, the correct figure is 0.28%. So the majority of people think that we send the EU at least 36 times more money than is the case. The lack of overall difference between Leave and Remain voters is of interest, but the level of ignorance or systematic misunderstanding on key issues is striking in both groups. I don't exclude myself from this generalisation: I scored only 10 out of 15 in the survey, and there were some lucky guesses among my answers.

I have previously made a suggestion that seems in line with Jason Brennan's ideas – that if we were to have another referendum, people should first have to pass a simple quiz to demonstrate that they have a basic understanding of what they are voting for. The results of Carl et al suggest, however, that this would disenfranchise most of the population. Given how ignorant we are about the EU, it does seem remarkable that we are now in a position where we have a deeply polarised population, with people self-identifying as Brexit or Remain voters more strongly than they identify with political parties (Evans & Schaffner, 2019).

*I would like to thank Lindsay Richards for making the raw data available to me, in a very clear and well-documented format. 


Carl, N., Richards, L., & Heath, A. (2019). Leave and Remain voters' knowledge of the EU after the referendum of 2016. Electoral Studies, 57, 90-98. https://doi.org/10.1016/j.electstud.2018.11.003

Evans, G. & Schaffner, F. (2019). Brexit identity vs party identity. In A. Menon (Ed). Brexit and public opinion 2019.

Saturday, 10 August 2019

A day out at 10 Downing Street

Yesterday, I attended a meeting at 10 Downing Street with Dominic Cummings, special advisor to Boris Johnson, for a discussion about science funding. I suspect my invitation will be regarded, in hindsight, as a mistake, and I hope some hapless civil servant does not get into trouble over it. I discovered that I was on the invitation list because of a recommendation by the eminent mathematician Tim Gowers, whom Cummings venerates. Tim wasn't able to attend the meeting, but apparently he is a fan of my blog, and we have bonded over a shared dislike of the evil empire of Elsevier. I had heard that Cummings liked bold, new ideas, and I thought that I might be able to contribute something, given that science funding is something I have blogged about.

The invitation came on Tuesday and, having confirmed that it was not a spoof, I spent some time reading Cummings' blog, to get a better idea of where he was coming from. My impression was that he is besotted with science, especially maths and technology, and impatient with bureaucracy. That seemed promising common ground.

The problem, though, is that as a major facilitator of Brexit in 2016, who is now persisting with the idea that Brexit must be achieved "at any cost", he is doing immense damage, because science transcends national boundaries. Don't just take my word for it: it's a message that has been stressed by the President of the Royal Society, the Government's Chief Scientific Advisor, the Chair of the Wellcome Trust, the President of the Academy of Medical Sciences, and the Director of the Crick Institute, among others. 

The day before the meeting, I received an email to say that the topic of discussion would be much narrower than I had been led to believe. The other invitees were four Professors of Mathematics and the Director of the Engineering and Physical Sciences Research Council. We were sent a discussion document written by one of the professors outlining a wish list for improvements in funding for academic mathematics in the UK. I wasn't sure if I was a token woman: I suspect Cummings doesn't go in for token women and that my invite was simply because it had been assumed that someone recommended by Gowers would be a mathematician. I should add that my comments here are in a personal capacity and my views should not be taken as representing those of the University of Oxford.

The meeting started, rather as expected, with Cummings saying that we would not be talking about Brexit, because "everyone has different views about Brexit" and it would not be helpful. My suspicion was that everyone around the table other than Cummings had very similar views about Brexit, but I could see that we'd not get anywhere arguing the point. So we started off feeling rather like a patient who visits a doctor for medical advice, only to be told "I know I just cut off your leg, but let's not mention that."

The meeting proceeded in a cordial fashion, with Cummings expressing his strong desire to foster mathematics in British universities, and asking the mathematicians to come up with their "dream scenario" for dramatically enhancing the international standing of their discipline over the next few years. As one might expect, more funding for researchers at all levels, longer duration of funding, plus less bureaucracy around applying for funding were the basic themes, though Brexit-related issues did keep leaking into the conversation – everyone was concerned about difficulties of attracting and retaining overseas talent, and about loss of international collaborations funded by the European Research Council. Cummings was clearly proud of the announcement on Thursday evening about easing of visa restrictions on overseas scientists, which has potential to go some way towards mitigating the problems created by Brexit. I felt, however, that he did not grasp the extent to which scientific research is an international activity, and breakthroughs depend on teams with complementary skills and perspectives, rather than the occasional "lone genius". It's not just about attracting "the very best minds from around the world" to come and work here.

Overall, I found the meeting frustrating. First, I felt that Cummings was aware that there was a conflict between his twin aims of pursuit of Brexit and promotion of science, but he seemed to think this could be fixed by increasing funding and cutting regulation. I also wonder where on earth the money is coming from. Cummings made it clear that any proposals would need Treasury approval, but he encouraged the mathematicians to be ambitious, and talked as if anything was possible. In a week when we learn the economy is shrinking for the first time in years, it's hard to believe he has found the forest of magic money trees that are needed to cover recent spending announcements, let alone additional funding for maths.

Second, given Cummings' reputation, I had expected a far more wide-ranging discussion of different funding approaches. I fully support increased funding for fundamental mathematics, and did not want to cut across that discussion, so I didn't say much. I had, however, expected a bit more evidence of creativity. In his blog, Cummings refers to the Defense Advanced Research Projects Agency (DARPA), which is widely admired as a model for how to foster innovation. DARPA was set up in 1958 with the goal of giving the US superiority in military and other technologies. It combined blue-skies and problem-oriented research, and was immensely successful, leading to the development of the internet, among other things. In his preamble, Cummings briefly mentioned DARPA as a useful model. Yet, our discussion was entirely about capacity-building within existing structures.

Third, no mention was made of problem-oriented funding. Many scientists dislike having governments control what they work on, and indeed, blue-skies research often generates quite unexpected and beneficial outcomes. But we are in a world with urgent problems that would benefit from the focussed attention of an interdisciplinary, and dare I say it, international group of talented scientists. In the past, it has taken world wars to force scientists to band together to find solutions to immediate threats. The rapid changes in the Arctic suggest that the climate emergency should be treated just like a war – a challenge to be tackled without delay. We should be deploying scientists, including mathematicians, to explore every avenue to mitigating the effects of global heating – physical and social – right now. Although there is interesting research on solar geoengineering going on at Harvard, it is clear that, under the Trump administration, we aren't going to see serious investment from the USA in tackling global heating. And, in any case, a global problem as complex as climate needs a multi-pronged solution. The economist Mariana Mazzucato understands this: her proposals for mission-oriented research take a different approach from that of the conventional funding agencies we have in the UK. Yet when I asked whether climate research was a priority in his planning, Cummings replied "it's not up to me". He said that there were lots of people pushing for more funding for research on "climate change or whatever", but he gave the impression that it was not something he would give priority to, and he did not display a sense of urgency. That's surprising in someone who is scientifically literate and has a child.

In sum, it's great that we have a special advisor who is committed to science. I'm very happy to see mathematics as a priority funding area. But I fear Dominic Cummings overestimates the extent to which he can mitigate the negative consequences of Brexit, and it is particularly unfortunate that his priorities do not include the climate emergency that is unfolding.

Saturday, 3 August 2019

Corrigendum: a word you may hope never to encounter

I have this week submitted a 'corrigendum' to a journal for an article published in the American Journal of Medical Genetics B (Bishop et al, 2006). It's just a fancy word for 'correction', and journals use it contrastively with 'erratum'. Basically, if the journal messes up and prints something wrong, it's an erratum. If the author is responsible for the mistake, it's a corrigendum.

I'm trying to remember how many corrigenda I've written over the 40-odd years I've been publishing: there have been at least three previous cases that I can remember, but there could be more. I think this one was the worst; previous errors have tended to just affect numbers in a minor way. In this case, a whole table of numbers (Table II) was thrown out, and although the main findings were upheld, there were some changes in the details.

I discovered the error when someone asked for the data for a meta-analysis. I was initially worried I would not be able to find the files, but fortunately, I had archived the dataset on a server, and eventually tracked it down. But it was not well-documented, and I then had the task of trawling through a number of cryptically-named files to try and work out which one was the basis for the data in the paper. My brain slowly reconstructed what the variable names meant and I got to the point of thinking I'd better check that this was the correct dataset by rerunning the analysis. Alas, although I could recreate most of what was published, I had the chilling realisation that there was a problem with Table II.

Table II was the one place in the analysis where, in trying to avoid one problem with the data (non-independence), I created a whole new problem (wrong numbers). I had data on siblings of children with autism, and in some cases there were two or three siblings in the family. These days I would have considered using a multilevel model to take family structure into account, but in 2005 I didn't know how to do that, and instead I decided to take a mean value for each family. So if there was one child, I used their score, but if there were two or three, then I averaged them. The N was then the number of families, not the number of children.

And here, dear Reader, is where I made a fatal mistake. I thought the simplest way to do this would be by creating a new column in my Excel spreadsheet which had the mean for each family, computing this by manually entering a formula based on the row numbers for the siblings in that family. The number of families was small enough for this to be feasible, and all seemed well. However, I noticed when I opened the file that I had pasted a comment in red on the top row that said 'DO NOT SORT THIS FILE!'. Clearly, I had already run into problems with my method, which would be totally messed up if the rows were reordered. Despite my warning message to myself, somewhere along the line, it seems that a change was made to the numbering, and this meant that a few children had been assigned to the wrong family. And that's why Table II had gremlins in it and needed correcting.

I now know that doing computations in Excel is almost always a bad idea, but in those days, I was innocent enough to be impressed with its computational possibilities. Now I use R, and life is transformed. The problem of computing a mean for each family can be scripted pretty easily, and then you have a lasting record of the analysis, which can be reproduced at any time. In my current projects, I aim to store data with a data dictionary and scripts on a repository such as Open Science Framework, with a link in the paper, so anyone can reconstruct the analysis, and I can find it easily if someone asks for the data. I wish I had learned about this years ago, but at least I can now use this approach with any new data – and I also aim to archive some old datasets as well.
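To make the point concrete, here is a minimal sketch of what such a script looks like when the mean is keyed on a family identifier rather than on row positions. The data values and family codes here are invented for illustration (the study's actual data are not shown); I've used Python with pandas, though the equivalent in R is a one-liner with aggregate or dplyr:

```python
import pandas as pd

# Hypothetical sibling data: a family ID and a score per child.
# (Illustrative values only - not the data from the actual study.)
df = pd.DataFrame({
    "family": ["F1", "F1", "F2", "F3", "F3", "F3"],
    "score":  [10.0, 12.0, 8.0, 14.0, 16.0, 15.0],
})

# One mean per family, keyed by family ID rather than by row number,
# so sorting or reordering the rows cannot scramble the result.
family_means = df.groupby("family")["score"].mean()

print(family_means.to_dict())  # {'F1': 11.0, 'F2': 8.0, 'F3': 15.0}
print(len(family_means))       # N = number of families, not children
```

Because the grouping uses the family ID column, the 'DO NOT SORT THIS FILE!' warning becomes unnecessary: the computation gives the same answer however the rows are ordered.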

For a journal, a corrigendum is a nuisance: they cost time and money in production, and are usually pretty hard to link up to the original article, so it may be seen as all a bit pointless. This is especially so given that a corrigendum is only appropriate if the error is not major. If an error would alter the conclusions that you'd draw from the data, then the paper will need to be retracted. Nevertheless, it is important for the scientific record to be accurate, and I'm pleased to say that the American Journal of Medical Genetics took this seriously. They responded promptly to my email documenting the problem, suggesting I write a corrigendum, which I have now done.

I thought it worth blogging about this to show how much easier my life would have been if I had been using the practices of data management and analysis that I am now starting to adopt. I also felt it does no harm to write about making mistakes, which is usually a taboo subject. I've argued previously that we should be open about errors, to encourage others to report them, and to demonstrate how everyone makes mistakes, even when trying hard to be accurate (Bishop, 2018). So yes, mistakes happen, but you do learn from them.

Bishop, D. V. M. (2018). Fallibility in science: Responding to errors in the work of oneself and others (Commentary). Advances in Methods and Practices in Psychological Science, 1(3), 432-438 doi:10.1177/2515245918776632. (For free preprint see: https://peerj.com/preprints/3486/)

Bishop, D. V. M., Maybery, M., Wong, D., Maley, A., & Hallmayer, J. (2006). Characteristics of the broader phenotype in autism: a study of siblings using the Children's Communication Checklist - 2. American Journal of Medical Genetics Part B (Neuropsychiatric Genetics), 141B, 117-122.