Sunday, 20 October 2019

Harry Potter and the Beast of Brexit

The People's Vote march yesterday was a good opportunity to catch up on family news with my brother, Owen, as we shuffled along from Hyde Park to Parliament Square. We were joined by my colleague Amy Orben, appropriately kitted out in blue and yellow.

Amy and Owen, People's March, Oct 19 2019
As with previous anti-Brexit marches, the atmosphere was friendly, many of the placards were jokey, and the event was enlivened by the presence of unexpected groups who reinforced our view that the UK is a friendly, inclusive, if somewhat eccentric place. I saw no signs of aggression, except right at the end, when a group of thuggish young men stood outside a pub by Trafalgar Square, shouting insults at the marchers.
Morris not Boris dancers
But the underlying mood was sombre. There was a sense of inevitability that Brexit was going to be pushed through, regardless of the damage done to individuals and to the country. Propaganda has won. The 'Will of the People' is used to justify us accepting a bad deal. The phrase is seldom challenged by journalists, who allow interviewees to trot it out, alongside the mantra, 'Respect the Referendum'.

But it's nonsense, of course. The deal that Johnson has achieved respects nothing and nobody. It bears no relation to what the 52% voted for. Many people now realise they were conned by the pre-referendum propaganda, which promised a Brexit that would fix all kinds of problems – underfinancing of the NHS, immigration, housing, jobs, even climate change. As Sadiq Khan memorably said, nobody voted to be poorer. And few people would think the break-up of the United Kingdom is a reasonable price to pay for Brexit. It would take just 5% of Leave voters switching to Remain to flip the outcome: 5% of the 52% is 2.6 percentage points, which turns 52:48 into roughly 49:51 in favour of Remain.

Even so, I'm not confident that another Referendum would lead to success for Remain. The problem is that Johnson and his cronies use dishonesty as a weapon. I feel like a character in a Harry Potter novel, where the good people are put at a disadvantage because they take ethical issues seriously.  That's why it's so important to hold our politicians to high standards: we have the Nolan principles of public life, but they lack teeth because they are just ignored. Meanwhile, those who want to preserve the country that we were proud to be part of – the one that came together so magnificently for the London Olympics in 2012 – aren't good at propaganda. Imagine if we had someone with the talent of Dominic Cummings fighting on our side: a propagandist who, instead of promoting fear and hatred, could manipulate people's opinions to make them feel pride and pleasure in being part of an inclusive, intelligent, peace-loving nation. Instead, those opposed to Brexit are divided, and show no signs of understanding how to campaign effectively – always put on the back foot. When we discuss the contents of Operation Yellowhammer, we are told this is Project Fear: an official government report is dismissed as Remain propaganda. Rather than making a pro-active case for remaining in the EU, we are manipulated into defending ourselves against preposterous accusations.

Despite the jokes and banter, the people marching yesterday were angry. We are angry to see our country wrecked for no good reason. I could put up with taking a personal hit to my standard of living if I could see that it benefited others – indeed I regularly vote for parties that propose higher taxation for people like me. The thing that is hard to stomach is the absence of coherent answers when you ask a Leaver about the benefits that will ensue after Brexit. I'm a rational person, and Brexit seems totally irrational – harming so many sectors of society while benefitting only the vulture capitalists. Meanwhile, on the international stage, our competitors and enemies must be enjoying the spectacle of seeing the EU being weakened, as we engage in this act of self-harm.

In the right-hand column below are potential benefits of Brexit that have been put forward by the few people who actually engage when asked why they want to leave. In the left-hand column, I list risks of Brexit that are, as far as I am aware, adequately documented by people with expertise in these areas. Some of these, such as supply problems, are more relevant to a no-deal Brexit; others apply more broadly. There are dependencies between some of these: damage to farming, social care, the NHS, science and Higher Education is a consequence of the loss of EU workers – both from reluctance to live in a xenophobic country, and from legal restrictions on their employment here. Disclaimer: I'm not an expert in politics and economics and I'd be glad to modify or add to the table if people can come up with well-evidenced arguments for doing so*.
My analysis of risks and benefits of Brexit
*Owen has commented on this (see below)
J.K. Rowling was prescient in her novels, which vividly described the triumph of propaganda over reason, of violence over peace, of the bully over the meek. With the Beast of Brexit, exemplified by Boris Johnson and his cronies, we see all these themes being played out in real life.

It is particularly galling when politicians argue that we have to have Brexit because otherwise there will be riots. In effect, this is saying that those who marched yesterday are to be ignored because they aren't violent. Of course, there are exceptions: I gather that it was not only Remain politicians who had to run the gauntlet of an angry crowd yesterday. Jacob Rees-Mogg was also verbally abused by a group of Remainers. I'm glad to say I have seen nobody defending such behaviour by either side. But politicians should not underestimate the genuine anger that is felt by Remainers, when people like Rees-Mogg claim in the Spectator that 'Everyone is saying “Just get on with it.” Moderate Remainers and Leavers alike are saying: “For goodness sake, please just finish it off.”’ One would hope that the thousands of moderate, peaceful marchers yesterday might disabuse him of that idea, yet I'm sure he'll continue to make these specious claims. Meanwhile, we are excluded from 'the People', are told we are undemocratic because we want a vote, and that we'd only be taken seriously if we started rioting.

I was particularly depressed to hear that some politicians had said they would support Boris Johnson's deal because they had received death threats from constituents. Have we really come to this? Are politicians saying to the people who marched yesterday that we'll only be listened to if we threaten to kill our opponents? Once we get to that point, we have lost all that is great about Britain. It is feeling perilously close.

Tuesday, 15 October 2019

The sinister side of French psychoanalysis revealed

Peak pseudoprofound bullshit* from Jacques Lacan; a proof that Woman does not exist
Sophie Robert, who created controversy in 2011 with her film 'Le Mur', has now produced a sequel, 'Le Phallus et le Neant'**, which extends her case against the dominance of psychoanalysis in French culture. In brief, the film makes the following points:
  1. Psychoanalysts enjoy a celebrity status in France as public intellectuals. 
  2. Their views are based heavily on writings of Sigmund Freud, Jacques Lacan and Françoise Dolto, and are not intellectually or scientifically coherent. 
  3. They promote ideas that are misogynistic and homophobic, and view small children as having erotic interest in their parents. Some of their statements appear to justify paedophilia and incest. 
  4. They do not see their role as helping cure psychiatric disorders. 
  5. They have a financial interest in maintaining their status quo. 
  6. Some of them work with vulnerable populations of children in institutions, which is especially troubling given point 3.
Le Mur focused on psychoanalytic treatment for autism (transcript available here); the new film has some overlap but is more structured around developing points 1-6, and raises further questions about the popularity of psychoanalysis for treatment of adult mental health problems in France. Although Robert notes at the outset that there are good practitioners who can help those who consult them, the overall message is that there are many analysts who do active harm to their clients, while charging them large sums of money. There appears to be no regulatory oversight of their activities.

Le Phallus et le Neant is a two-hour film, and I recommend watching it in full; I started by finding the analysts merely irritating and pretentious, but as the film developed, it became increasingly disturbing. The last quarter included interviews with women who had suffered sexual abuse as children, and who were told they should not see themselves as victims.

Here are just a few clips to illustrate the content of the interviews with analysts.

Much of the first part of the film focuses on the negative views of Woman proposed by Freud and Lacan. Penis envy is taken extremely seriously.
Relationships between parents and their children are seen as complicated and problematic:

A cheerful and positive attitude to sex seems unattainable:

Regarding homosexuality, the film notes the influence of the late Andre Green, who according to Wikipedia was 'one of the most important psychoanalytic thinkers of our times'. Green regarded homosexuality as a psychosis. Confronted with evidence of well-balanced and contented gay men, he claimed they were psychotics-in-denial, apparently healthy but likely to fall prey to insanity at any time. Sophie probed her interviewees about this, and they looked cagey, particularly when asked if there were any gay psychoanalysts. The idea of gay couples as parents has been highly contentious in France: if we believed the psychoanalysts, this would be a disaster. In fact, as shown by the work of Susan Golombok and colleagues, it isn't anything of the kind.

If you argue against the views of the analysts, by saying you never wanted a penis, you had a loving but unerotic relationship with your parents, and you find adult sex fun, then this is treated as evidence of the depth of your repression, rather than evidence for the invalidity of the theory.

The late Françoise Dolto had a major influence on psychoanalytic thought in France. Her claims  that children have desire towards adults, and trap them because of this, were reflected at several points in the interviews.
And given these provocative children, it seems that a father who commits incest with his child is really only doing what comes naturally:

A final point is the mismatch between the expectations of clients and what the psychoanalyst offers. One assumes that most people go into analysis with the expectation that it will help them: after all, they invest a great deal of time and money in the process. But that does not seem to be the view of the analysts. Their attitude seems to be that the human situation is pretty hopeless, because what people want (sex with a parent) is not possible, and the best they can do is to help us come to realise that:

* This term is taken from Pennycook, G., Cheyne, J. A., Barr, N., Koehler, D. J., & Fugelsang, J. A. (2015). On the reception and detection of pseudo-profound bullshit. Judgment and Decision Making, 10, 549-563.

**A version of the film with English subtitles is publicly available on Vimeo at a cost of €4. Conflict of interest statement: I contributed to the funding of the film, but I will donate any royalties to charity.

Tuesday, 10 September 2019

Responding to the replication crisis: reflections on Metascience2019

Talk by Yang Yang at Metascience 2019
I'm just back from Metascience 2019. It was an immense privilege to be invited to speak at such a stimulating and timely meeting, and I would like to thank the Fetzer Franklin Fund, who not only generously funded the meeting, but also ensured the impressively smooth running of a packed schedule. The organisers - Brian Nosek, Jonathan Schooler, Jon Krosnick, Leif Nelson and Jan Walleczek - did a great job in bringing together speakers on a range of topics, and the quality of talks was outstanding. For me, highlights were hearing great presentations from people well outside my usual orbit, such as Melissa Schilling on 'Where do breakthrough ideas come from?', Carl Bergstrom on modelling grant funding systems, and Cailin O'Connor on scientific polarisation.

The talks were recorded, but I gather it may be some months before the film is available. Meanwhile, slides of many of the presentations are available here, and there is a copious Twitter stream on the hashtag #metascience2019. Special thanks are due to Joseph Fridman (@joseph_fridman): if you look at his timeline, you can pretty well reconstruct the entire meeting from live tweets. Noah Haber (@NoahHaber) also deserves special mention for extensive commentary, including a post-conference reflection starting here.  It is a sign of a successful meeting, I think, if it gets people, like Noah, raising more general questions about the direction the field is going in, and it is in that spirit I would like to share some of my own thoughts.

In the past 15 years or so, we have made enormous progress in documenting problems with credibility of research findings, not just in psychology, but in many areas of science. Metascience studies have helped us quantify the extent of the problem and begun to shed light on the underlying causes. We now have to confront the question of what we do next. That would seem to be a no-brainer: we need to concentrate on fixing the problem. But there is a real danger of rushing in with well-intentioned solutions that may be ineffective at best or have unintended consequences at worst.

One question is whether we should be continuing with a focus on replication studies. Noah Haber was critical of the number of talks that focused on replication, but I had a rather different take on this: it depends on what the purpose of a replication study is. I think further replication initiatives, in the style of the original Reproducibility Project, can be invaluable in highlighting problems (or not) in a field. Tim Errington's talk about the Cancer Biology Reproducibility Project demonstrated beautifully how a systematic attempt to replicate findings can reveal major problems in a field. Studies in this area are often dependent on specialised procedures and materials, which are either poorly described or unavailable. In such circumstances it becomes impossible for other labs to reproduce the methods, let alone replicate the results. The mindset of many researchers in this area is also unhelpful – the sense is that competition dominates, and open science ideals are not part of the training of scientists. But these are problems that can be fixed.

As was evident from my questions after the talk, I was less enthused by the idea of doing a large replication of Daryl Bem's studies on extra-sensory perception. Zoltán Kekecs and his team have put in a huge amount of work to ensure that this study meets the highest standards of rigour, and it is a model of collaborative planning, ensuring input into the research questions and design from those with very different prior beliefs. I just wondered what the point was. If you want to put in all that time, money and effort, wouldn't it be better to investigate a hypothesis about something that doesn't contradict the laws of physics? There were two responses to this. Zoltán's view was that the study would tell us more than whether or not precognition exists: it would provide a model of methods that could be extended to other questions. That seems reasonable: some of the innovations, in terms of automated methods and collaborative working, could be applied in other contexts to ensure original research was done to the highest standards. Jonathan Schooler, on the other hand, felt it was unscientific of me to prejudge the question, given a large previous literature of positive findings on ESP, including a meta-analysis. Given that I come from a field where there are numerous phenomena that have been debunked after years of apparent positive evidence, I was not swayed by this argument. (See for instance this blogpost on 5-HTTLPR and depression). If the study by Kekecs et al sets such a high standard that the results will be treated as definitive, then I guess it might be worthwhile. But somehow I doubt that a null finding in this study will convince believers to abandon this line of work.

Another major concern I had was the widespread reliance on proxy indicators of research quality. One talk that exemplified this was Yang Yang's presentation on machine intelligence approaches to predicting replicability of studies. He started by noting that non-replicable results get cited just as much as replicable ones: a depressing finding indeed, and one that motivated the study he reported. His talk was clever at many levels. It was ingenious to use the existing results from the Reproducibility Project as a database that could be mined to identify characteristics of results that replicated. I'm not qualified to comment on the machine learning approach, which involved using ngrams extracted from texts to predict a binary category of replicable or not. But implicit in this study was the idea that the results from this exercise could be useful in future in helping us identify, just on the basis of textual analysis, which studies were likely to be replicable.

Now, this seems misguided on several levels. For a start, as we know from the field of medical screening, the usefulness of a screening test depends on the base rate of the condition you are screening for, the extent to which the sample you develop the test on is representative of the population, and the accuracy of prediction. I would be frankly amazed if the results of this exercise yielded a useful screener. But even if they did, then Goodhart's law would kick in: as soon as researchers became aware that there was a formula being used to predict how replicable their research was, they'd write their papers in a way that would maximise their score. One can even imagine whole new companies springing up who would take your low-scoring research paper and, for a price, revise it to get a better score. I somehow don't think this would benefit science. In defence of this approach, it was argued that it would allow us to identify characteristics of replicable work, and encourage people to emulate these. But this seems back-to-front logic. Why try to optimise an indirect, weak proxy for what makes good science (ngram characteristics of the write-up) rather than optimising, erm, good scientific practices? Recommended readings in this area include Philip Stark's short piece on Preproducibility, as well as Florian Markowetz's 'Five selfish reasons to work reproducibly'.
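
To make the base-rate point concrete, here is a minimal sketch in R; the sensitivity, specificity and base rates are invented for illustration, not taken from any real screening tool:

    # How the positive predictive value of a "will this replicate?" screener
    # depends on the base rate of replicable findings (all numbers invented)
    sensitivity <- 0.80   # assumed P(flagged replicable | truly replicable)
    specificity <- 0.80   # assumed P(flagged non-replicable | truly non-replicable)
    base_rate   <- seq(0.1, 0.9, by = 0.2)   # proportion of truly replicable findings

    ppv <- (sensitivity * base_rate) /
           (sensitivity * base_rate + (1 - specificity) * (1 - base_rate))

    round(data.frame(base_rate, ppv), 2)
    # The same 80/80 screener is right about its 'replicable' calls anywhere
    # from about 31% to 97% of the time, depending on the base rate.

The point is that exactly the same textual formula could look impressive or useless depending on how common replicable findings are in the literature it is applied to.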

My reservations here are an extension of broader concerns about reliance on text-mining in meta-science (see e.g. https://peerj.com/articles/1715/). We have this wonderful ability to pull in mountains of data from online literature to see patterns that might be undetectable otherwise. But ultimately, the information that we extract cannot give more than a superficial sense of the content. It seems sometimes that we're moving to a situation where science will be done by bots, leaving the human brain out of the process altogether. This would, to my mind, be a mistake.

Sunday, 8 September 2019

Voting in the EU Referendum: Ignorance, deceit and folly

As a Remainer, I am baffled as to what Brexiteers want. If you ask them, as I sometimes do on Twitter, they mostly give you slogans such as "Taking Back Control". I'm more interested in specifics, i.e. what things do people think will be better for them if we leave. It is clear that things that matter to me – the economy, the health service, scientific research, my own freedom of movement in Europe - will be damaged by Brexit. I would put up with that if there was some compensating factor that would benefit other people, but I'm not convinced there is. In fact, all the indications are that people who voted to leave will suffer the negatives of Brexit just as much as those who voted to remain.

But are people who want to leave really so illogical? Brexit, with all its complexities, is a long way from my expertise. I've tried to educate myself so that I can understand the debates about different options, but I'm aware that, despite being highly educated, I don't know much about the EU. I recently decided that, as someone who is interested in evidence, I should take a look at some of the surveys on this topic. We all suffer from confirmation bias, the tendency to seek out, process and remember just that information that agrees with our preconceptions, so I wanted to approach this as dispassionately as I could. The UK in a Changing Europe project is a useful starting place. They are funded by the Economic and Social Research Council, and appear to be even-handed. I have barely begun to scratch the surface of the content of their website, but I found their report Brexit and Public Opinion 2019 provides a useful and readable summary of recent academic research.

One paper summarised in the Brexit and Public Opinion 2019 report caught my attention in particular. Carl, Richards and Heath (2019) reported results from a survey of over 3000 people selected to be broadly representative of the British population, who were asked 15 questions about the EU. Overall, there was barely any difference between Leave and Remain voters in the accuracy of answers. The authors noted that results counteracted a common belief, put forward by some prominent commentators – that Leave voters had, on average, a weaker understanding of what they voted for than Remain voters. Interestingly, Carl et al did confirm, as other surveys had done, that those voting Leave were less well-educated than Remain voters, and indeed, in their study, the Leave voters did less well on a test of probabilistic reasoning. But this was largely unrelated to their responses to the EU survey. The one factor that did differentiate Leave and Remain voters was how they responded to a subset of questions that were deemed 'ideologically convenient' for their position: I have replotted the data below*. As an aside, I'm not entirely convinced by the categorisation of certain items as ideologically convenient - shown in the figure with £ and € symbols - but that is a minor point.
Responses to survey items from Carl et al (2019) Table 1.  Items marked £ were regarded as ideologically convenient for Brexit voters; those marked € as convenient for Remain voters
I took a rather different message away from the survey, however. I have to start by saying that I was rather disappointed when I read the survey items, because they didn't focus on implications of EU membership for individuals. I would have liked to see items probing knowledge of how leaving the EU might affect trade, immigration and travel, and relations between England and the rest of the UK. The survey questions instead tested factual knowledge about the EU, which could be scored using a simple Yes/No response format. It would have been perhaps more relevant, when seeking evidence for validity of the referendum, to assess how accurately people estimated the costs and benefits of EU membership.

With that caveat, the most striking thing to me was how poorly people did on the survey, regardless of whether they voted Leave or Remain. There were 15 two-choice questions. If people were just guessing at random, they would be expected to score on average 7.5, with 95% of people scoring between 4 and 11.  Carl et al plotted the distribution of scores (Figure 2) and noted that the average score was only 8.8, not much higher than what would be expected if people were just guessing. Only 11.2% of Leave voters and 13.1% of Remain voters scored 12 or more. However, the item-level responses indicate that people weren't just guessing, because there were systematic differences from item to item. On some items, people did better than chance. But, as Carl et al noted, there were four items where people performed below chance. Three of these items had been designated as "ideologically convenient" for the Remain position, and one as convenient for the Leave position.

Figure 1 from Carl et al (2019). Distributions of observed scores and scores expected under guessing.
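
For anyone who wants to check the guessing baseline for themselves, a couple of lines of R (my own sanity check, not taken from the paper) reproduce those figures:

    # 15 two-alternative items answered at random: Binomial(15, 0.5)
    n_items <- 15
    n_items * 0.5                           # expected score: 7.5
    qbinom(c(0.025, 0.975), n_items, 0.5)   # roughly 95% of guessers score between 4 and 11
    pbinom(11, n_items, 0.5, lower.tail = FALSE)   # P(score >= 12) by luck alone: about 0.018

So the 11-13% of voters scoring 12 or more is comfortably above what guessing alone would produce, even though the overall mean of 8.8 is only a little above the chance value of 7.5.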

Carl et al cited a book by Jason Brennan, Against Democracy, which argues that "political decisions are presumed to be unjust if they are made incompetently, or in bad faith, or by a generally incompetent decision-making body". I haven't read the book yet, but that seems a reasonable point.

However, having introduced us to Brennan's argument, Carl et al explained: "Although our study did not seek to determine whether voters overall were sufficiently well informed to satisfy Brennan's (2016) ‘competence principle’, it did seek to determine whether there was a significant disparity in knowledge between Leave and Remain voters, something which––if present––could also be considered grounds for questioning the legitimacy of the referendum result."

My view is that, while Carl et al may not have set out to test the competence principle, their study nevertheless provided evidence highly relevant to the principle, evidence that challenges the validity of the referendum. If one accepts the EU questionnaire as an indicator of competence, then both Leave and Remain voters are severely lacking. Not only do they show a woeful ignorance of the EU, they also, in some respects, show evidence of systematic misunderstanding. 72% of Leave voters and 50% of Remain voters endorsed the statement that "More than ten per cent of British government spending goes to the EU." (Item M in Figure 1).  According to the Europa.eu website, the correct figure is 0.28%.  So the majority of people think that we send the EU at least 36 times more money than is the case. The lack of overall difference between Leave and Remain voters is of interest, but the level of ignorance or systematic misunderstanding on key issues is striking in both groups. I don't exclude myself from this generalisation: I scored only 10 out of 15 in the survey, and there were some lucky guesses among my answers.

I have previously made a suggestion that seems in line with Jason Brennan's ideas – that if we were to have another referendum, people should first have to pass a simple quiz to demonstrate that they have a basic understanding of what they are voting for. The results of Carl et al suggest, however, that this would disenfranchise most of the population. Given how ignorant we are about the EU, it does seem remarkable that we are now in a position where we have a deeply polarised population, with people self-identifying as Brexit or Remain voters more strongly than they identify with political parties (Evans & Schaffner, 2019).

*I would like to thank Lindsay Richards for making the raw data available to me, in a very clear and well-documented format. 

References

Carl, N., Richards, L., & Heath, A. (2019). Leave and Remain voters' knowledge of the EU after the referendum of 2016. Electoral Studies, 57, 90-98. https://doi.org/10.1016/j.electstud.2018.11.003

Evans, G., & Schaffner, F. (2019). Brexit identity vs party identity. In A. Menon (Ed.), Brexit and Public Opinion 2019.

Saturday, 10 August 2019

A day out at 10 Downing Street

Yesterday, I attended a meeting at 10 Downing Street with Dominic Cummings, special advisor to Boris Johnson, for a discussion about science funding. I suspect my invitation will be regarded, in hindsight, as a mistake, and I hope some hapless civil servant does not get into trouble over it. I discovered that I was on the invitation list because of a recommendation by the eminent mathematician Tim Gowers, who is venerated by Cummings. Tim wasn't able to attend the meeting, but apparently he is a fan of my blog, and we have bonded over a shared dislike of the evil empire of Elsevier. I had heard that Cummings liked bold, new ideas, and I thought that I might be able to contribute something, given that science funding is something I have blogged about.

The invitation came on Tuesday and, having confirmed that it was not a spoof, I spent some time reading Cummings' blog, to get a better idea of where he was coming from. The impression is that he is besotted with science, especially maths and technology, and impatient with bureaucracy. That seemed promising common ground.

The problem, though, is that as a major facilitator of Brexit in 2016, who is now persisting with the idea that Brexit must be achieved "at any cost", he is doing immense damage, because science transcends national boundaries. Don't just take my word for it: it's a message that has been stressed by the President of the Royal Society, the Government's Chief Scientific Advisor, the Chair of the Wellcome Trust, the President of the Academy of Medical Sciences, and the Director of the Crick Institute, among others. 

The day before the meeting, I received an email to say that the topic of discussion would be much narrower than I had been led to believe. The other invitees were four Professors of Mathematics and the Director of the Engineering and Physical Sciences Research Council. We were sent a discussion document written by one of the professors outlining a wish list for improvements in funding for academic mathematics in the UK. I wasn't sure if I was a token woman: I suspect Cummings doesn't go in for token women and that my invite was simply because it had been assumed that someone recommended by Gowers would be a mathematician. I should add that my comments here are in a personal capacity and my views should not be taken as representing those of the University of Oxford.

The meeting started, rather as expected, with Cummings saying that we would not be talking about Brexit, because "everyone has different views about Brexit" and it would not be helpful. My suspicion was that everyone around the table other than Cummings had very similar views about Brexit, but I could see that we'd not get anywhere arguing the point. So we started off feeling rather like a patient who visits a doctor for medical advice, only to be told "I know I just cut off your leg, but let's not mention that."

The meeting proceeded in a cordial fashion, with Cummings expressing his strong desire to foster mathematics in British universities, and asking the mathematicians to come up with their "dream scenario" for dramatically enhancing the international standing of their discipline over the next few years. As one might expect, more funding for researchers at all levels, longer duration of funding, plus less bureaucracy around applying for funding were the basic themes, though Brexit-related issues did keep leaking into the conversation – everyone was concerned about difficulties of attracting and retaining overseas talent, and about loss of international collaborations funded by the European Research Council. Cummings was clearly proud of the announcement on Thursday evening about easing of visa restrictions on overseas scientists, which has potential to go some way towards mitigating some of the problems created by Brexit. I felt, however, that he did not grasp the extent to which scientific research is an international activity, and breakthroughs depend on teams with complementary skills and perspectives, rather than the occasional "lone genius". It's not just about attracting "the very best minds from around the world" to come and work here.

Overall, I found the meeting frustrating. First, I felt that Cummings was aware that there was a conflict between his twin aims of pursuit of Brexit and promotion of science, but he seemed to think this could be fixed by increasing funding and cutting regulation. I also wonder where on earth the money is coming from. Cummings made it clear that any proposals would need Treasury approval, but he encouraged the mathematicians to be ambitious, and talked as if anything was possible. In a week when we learn the economy is shrinking for the first time in years, it's hard to believe he has found the forest of magic money trees that are needed to cover recent spending announcements, let alone additional funding for maths.

Second, given Cummings' reputation, I had expected a far more wide-ranging discussion of different funding approaches. I fully support increased funding for fundamental mathematics, and did not want to cut across that discussion, so I didn't say much. I had, however, expected a bit more evidence of creativity. In his blog, Cummings refers to the Defense Advanced Research Projects Agency (DARPA), which is widely admired as a model for how to foster innovation. DARPA was set up in 1958 with the goal of giving the US superiority in military and other technologies. It combined blue-skies and problem-oriented research, and was immensely successful, leading to the development of the internet, among other things. In his preamble, Cummings briefly mentioned DARPA as a useful model. Yet, our discussion was entirely about capacity-building within existing structures.

Third, no mention was made of problem-oriented funding. Many scientists dislike having governments control what they work on, and indeed, blue-skies research often generates quite unexpected and beneficial outcomes. But we are in a world with urgent problems that would benefit from the focussed attention of an interdisciplinary, and dare I say it, international group of talented scientists. In the past, it has taken world wars to force scientists to band together to find solutions to immediate threats. The rapid changes in the Arctic suggest that the climate emergency should be treated just like a war - a challenge to be tackled without delay. We should be deploying scientists, including mathematicians, to explore every avenue to mitigating the effects of global heating – physical and social – right now. Although there is interesting research on solar geoengineering going on at Harvard, it is clear that, under the Trump administration, we aren't going to see serious investment from the USA in tackling global heating. And, in any case, a global problem as complex as climate needs a multi-pronged solution. The economist Mariana Mazzucato understands this: her proposals for mission-oriented research take a different approach to the conventional funding agencies we have in the UK. Yet when I asked whether climate research was a priority in his planning, Cummings replied "it's not up to me". He said that there were lots of people pushing for more funding for research on "climate change or whatever", but he gave the impression that it was not something he would give priority to, and he did not display a sense of urgency. That's surprising in someone who is scientifically literate and has a child.

In sum, it's great that we have a special advisor who is committed to science. I'm very happy to see mathematics as a priority funding area. But I fear Dominic Cummings overestimates the extent to which he can mitigate the negative consequences of Brexit, and it is particularly unfortunate that his priorities do not include the climate emergency that is unfolding.

Saturday, 3 August 2019

Corrigendum: a word you may hope never to encounter


I have this week submitted a 'corrigendum' to a journal for an article published in the American Journal of Medical Genetics B (Bishop et al, 2006). It's just a fancy word for 'correction', and journals use it contrastively with 'erratum'. Basically, if the journal messes up and prints something wrong, it's an erratum. If the author is responsible for the mistake, it's a corrigendum.

I'm trying to remember how many corrigenda I've written over the 40-odd years I've been publishing: there have been at least three previous cases that I can remember, but there could be more. I think this one was the worst; previous errors have tended to just affect numbers in a minor way. In this case, a whole table of numbers (Table II) was thrown out, and although the main findings were upheld, there were some changes in the details.

I discovered the error when someone asked for the data for a meta-analysis. I was initially worried I would not be able to find the files, but fortunately, I had archived the dataset on a server, and eventually tracked it down. But it was not well-documented, and I then had the task of trawling through a number of cryptically-named files to try and work out which one was the basis for the data in the paper. My brain slowly reconstructed what the variable names meant and I got to the point of thinking I'd better check that this was the correct dataset by rerunning the analysis. Alas, although I could recreate most of what was published, I had the chilling realisation that there was a problem with Table II.

Table II was the one place in the analysis where, in trying to avoid one problem with the data (non-independence), I created a whole new problem (wrong numbers). I had data on siblings of children with autism, and in some cases there were two or three siblings in the family. These days I would have considered using a multilevel model to take family structure into account, but in 2005 I didn't know how to do that, and instead I decided to take a mean value for each family. So if there was one child, I used their score, but if there were 2 or 3, then I averaged them. The N was then the number of families, not the number of children.

And here, dear Reader, is where I made a fatal mistake. I thought the simplest way to do this would be by creating a new column in my Excel spreadsheet which had the mean for each family, computing this by manually entering a formula based on the row numbers for the siblings in that family. The number of families was small enough for this to be feasible, and all seemed well. However, I noticed when I opened the file that I had pasted a comment in red on the top row that said 'DO NOT SORT THIS FILE!'. Clearly, I had already run into problems with my method, which would be totally messed up if the rows were reordered. Despite my warning message to myself, somewhere along the line, it seems that a change was made to the numbering, and this meant that a few children had been assigned to the wrong family. And that's why table II had gremlins in it and needed correcting.

I now know that doing computations in Excel is almost always a bad idea, but in those days, I was innocent enough to be impressed with its computational possibilities. Now I use R, and life is transformed. The problem of computing a mean for each family can be scripted pretty easily, and then you have a lasting record of the analysis, which can be reproduced at any time. In my current projects, I aim to store data with a data dictionary and scripts on a repository such as Open Science Framework, with a link in the paper, so anyone can reconstruct the analysis, and I can find it easily if someone asks for the data. I wish I had learned about this years ago, but at least I can now use this approach with any new data – and I also aim to archive some old datasets as well.
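
For what it's worth, here is a minimal sketch of the kind of script that would have avoided the whole problem. The file name and column names are hypothetical (the real dataset had different, cryptically-named variables), but the point is that the mean is keyed to a family identifier rather than to row positions, so re-sorting the file can never assign children to the wrong family:

    # Minimal sketch with made-up names (sibling_data.csv, family_id, ccc2_score)
    siblings <- read.csv("sibling_data.csv")   # hypothetical file name

    # One mean score per family, however many siblings it contains
    family_means <- aggregate(ccc2_score ~ family_id, data = siblings, FUN = mean)
    nrow(family_means)   # N = the number of families, as used in the table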

For a journal, a corrigendum is a nuisance: it takes time and money to produce, and corrigenda are usually pretty hard to link up to the original article, so it may all be seen as a bit pointless. This is especially so given that a corrigendum is only appropriate if the error is not major. If an error would alter the conclusions that you'd draw from the data, then the paper will need to be retracted. Nevertheless, it is important for the scientific record to be accurate, and I'm pleased to say that the American Journal of Medical Genetics took this seriously. They responded promptly to my email documenting the problem, suggesting I write a corrigendum, which I have now done.

I thought it worth blogging about this to show how much easier my life would have been if I had been using the practices of data management and analysis that I now am starting to adopt. I also felt it does no harm to write about making mistakes, which is usually a taboo subject. I've argued previously that we should be open about errors, to encourage others to report them, and to demonstrate how everyone makes mistakes, even when trying hard to be accurate (Bishop, 2018). So yes, mistakes happen, but you do learn from them.

References 
Bishop, D. V. M. (2018). Fallibility in science: Responding to errors in the work of oneself and others (Commentary). Advances in Methods and Practices in Psychological Science, 1(3), 432-438 doi:10.1177/2515245918776632. (For free preprint see: https://peerj.com/preprints/3486/)

Bishop, D. V. M., Maybery, M., Wong, D., Maley, A., & Hallmayer, J. (2006). Characteristics of the broader phenotype in autism: a study of siblings using the Children's Communication Checklist - 2. American Journal of Medical Genetics Part B (Neuropsychiatric Genetics), 141B, 117-122.

Saturday, 20 July 2019

A call for funders to ban institutions that use grant capture targets

I caused unease on Twitter this week when I criticised a piece in the Times Higher Education on 'How to win a research grant'. As I explained in a series of tweets, I have no objection to experienced grant-holders sharing their pearls of wisdom with other academics: indeed, I've given my own tips in the past. My objection was to the sentiment behind the lede beneath the headline: "Even in disciplines in which research is inherently inexpensive, ‘grant capture’ is increasingly being adopted as a metric to judge academics and universities. But with success rates typically little better than one in five, rejection is the fate of most applications." I made the observation that it might have been better if the Times Higher had noted that grant capture is a stupid way to evaluate academics.

Science is in trouble when the getting of grant funding is seen as an end in itself rather than a means to the end of doing good research, with researchers rewarded in proportion to how much money they bring in. I've rehearsed the arguments for this view more than once on my blog (see, e.g. here); many of these points were anticipated by Raphael Gillett in 1991, long before 'grant capture' became widespread as an explicit management tool. Although my view is shared by some other senior figures (see, e.g., this piece by John Ioannidis), it is seldom voiced. When I suggested that the best approach to seeking funding was to wait until you had a great idea that you were itching to implement, the patience of my followers snapped. It was clear that to many people working in academia, this view is seen as naive and unrealistic. Quite simply, it's a case of get funded or get fired. When I started out, funding success may have been used informally to rate academics, but now it is often explicit, sometimes to the point where expected grant income targets are specified.

Encouraging more and more grant submissions is toxic, both for researchers and for science, but everyone feels trapped. So how could we escape from this fix?

I think the solution has to be down to funders. They should be motivated to tackle the problem for several reasons.
  • First, they are inundated with far more proposals than they can fund - to the extent that many of them use methods of "demand management" to stem the tide. 
  • Second, if people are pressurised into coming up with research projects in order to become or remain employed, this is not likely to lead to particularly good research. We might expect quality of proposals to improve if people are encouraged to take time to develop and hone a great idea.
  • Third, although peer review of grants is generally thought to be the best among various unsatisfactory options for selecting grants, it is known to have poor reliability, and there is an element of lottery as to who gets funded. There's a real risk that, with grant capture being used as a metric, many researchers are being lost from the system because they were unlucky rather than untalented. 
  • Fourth, if people are evaluated in terms of the amount of funding they acquire, they will be motivated to make their proposals as expensive as possible: this cannot be in the interests of the funders.
Funders have considerable power in their hands and they can use it to change the culture. This was neatly demonstrated when the Athena SWAN charter started up, originally focused on improving gender equality in STEMM subjects. Institutions paid lip service to it, but there was little action until the Chief Medical Officer, Sally Davies, declared that to be eligible for biomedical funding from NIHR, institutions would have to have a Silver Athena SWAN award.  This raising of the stakes concentrated the minds of Vice Chancellors to an impressive degree.

My suggestion is that major funders such as Research England, the Wellcome Trust and Cancer Research UK could at a stroke improve research culture in the UK by implementing a rule whereby any institution that used grant capture as a criterion for hiring, firing or promotion would be ineligible to host grants.

Reference
Gillett, R. (1991). Pitfalls in assessing research performance by grant income. Scientometrics, 22(2), 253-263.

Sunday, 26 May 2019

The Do It Yourself (DIY) conference


This blogpost was inspired by a tweet from Natalie Jester, a PhD student at the School for Sociology, Politics and International Studies at the University of Bristol, raising this question:


I agreed with her, noting that the main costs were venue hire and speaker expenses, but that prices were often hiked by organisers using lavish venues and aiming to make a profit from the meeting. I linked to my earlier post about the eye-watering profits that the Society for Neuroscience makes from its meetings. In contrast, the UK's Experimental Psychology Society uses its income from membership fees and its journal to support meetings three times a year, and doesn't even charge a registration fee.

Pradeep Reddy Raamana, a Canadian Open Neuroscience scholar from Toronto, responded, drawing my attention to a thread on this very topic from a couple of weeks ago.



There were useful suggestions in the thread, including reducing costs by spending less on luxurious accommodation for organisers, and encouraging PIs to earmark funds for their junior staff to cover their conference attendance costs.

That's all fine, but my suggestion is for a radically different approach, which is to find a small group of 2-3 like-minded people and organise your own conference. I'm sure that people will respond by saying that they have to go to the big society meetings in their field in order to network and promote their research.  There's nothing in my suggestions that would preclude you also doing this (though see climate emergency point below). But I suspect that if you go down the DIY route, you may get a lot more out of the experience than you would by attending a big, swish society conference: both in terms of personal benefits and career prospects.

I'm sure people will want to add to these ideas, but here's my experience, which is based on running various smallish meetings, including being local organiser for occasional EPS meetings over the years. I was also, with Katharine Perera, Gina Conti-Ramsden and Elena Lieven, a co-organiser of the Child Language Seminar (CLS) in Manchester back in the 1980s. That is perhaps the best example of a DIY conference, because we had no infrastructure and just worked it out as we went along. The CLS was a very ad hoc thing: each year, the meeting organisers tried to find someone who was prepared to run the next CLS at their own institution the following year. Despite this informality, the CLS – now with the more appropriate name of Child Language Symposium – is still going strong in 2019. From memory, we had around 120 people from all over the world at the Manchester meeting. Numbers have grown over the years, but in general, if you were doing a DIY meeting for the first time, I'd aim to keep it small: no more than 200 people.

The main costs you will incur in organising a meeting are:
  • Venue
  • Refreshments
  • Reception/Conference dinner
  • Expenses for speakers
  • Administrative costs
  • Publicity
Your income to cover these costs will come from:
  • Grants (optional)
  • Registration fees

So the main thing to do at the start is to sit down and do some sums to ensure you will break even. Here are my experiences on each of these categories:

Venue

You do not need to hold the meeting at a swanky hotel. Your university is likely to have conference facilities: check out their rates. Consider what you need in terms of lecture theatre capacity, break-out rooms, rooms for posters/refreshments.  You need to factor in cost of technical support. My advice is you should let people look after their own accommodation: at most just give them a list of places to stay. This massively cuts down on your workload.

Refreshments

The venue should be able to offer teas/coffees. You will probably be astounded at what institutions charge for a cup of instant coffee and a boring biscuit, but I recommend you go with the flow on that one. People do need their coffee breaks, however humble the refreshments.

Reception/Conference dinner

A welcome reception is a good way of breaking the ice on the first evening. It need not be expensive: a few bottles of wine plus water and soft drinks and some nibbles is adequate. You could just find a space to do this and provide the refreshments yourselves: most of the EPS meetings I've been to just have some bottles provided and people help themselves. This will be cheaper than rates from conference organisers.

You don't have to have a conference dinner. They can be rather stuffy affairs, and a torment for shy people who don't know anyone. On the other hand, when they work well, they provide an opportunity to get to know people and chat about work informally. My experience at EPS and CLS is that the easiest way to organise this is to book a local restaurant. They will probably suggest a set meal at a set price, with people selecting options in advance. This will involve some admin work – see below.

Expenses for speakers

For a meeting like CLS there are a small number of invited plenary speakers. This is your opportunity to invite the people you really want to hear from. It's usual to offer economy class travel and accommodation in a good hotel. This does not need to be lavish, but it should have quiet rooms with ensuite bathroom, large, comfortable bed, desk area, sufficient power supply, adequate aircon/heating, and free wifi. Someone who has flown around the world to come to your meeting is not going to remember you fondly if they are put up in a cramped bed and breakfast. I've had some dismal experiences over the years and now check TripAdvisor to make sure I've not been booked in somewhere awful.  I still remember attending a meeting where an eminent speaker had flown in from North America only to find herself put in student accommodation: she turned around and booked herself into a hotel, and left with dismal memories of the organisers.

Pradeep noted that conferences could save costs if speakers covered their own expenses. This is true, and many do have funds that they could use for this purpose. But don't assume that is the case: if they do have funds, you'd have to consider why they'd rather spend that money on coming to your meeting than on something else. A diplomatic way of discussing this is to say in the letter of invitation that you can cover economy class travel, accommodation, dinner and registration. However, if they have funds that could be used for their travel, then that will make it possible to offer some sponsored places to students.

Administration

It's easy to overlook this item, but fortunately it is now relatively simple to handle registrations with online tools such as EventBrite. They take a cut if you charge for registration, but that's well worth it in my experience, in terms of saving a lot of grief with spreadsheets. If you are going for a conference dinner, then booking for this can be bundled in with registration fee.

In the days of Manchester CLS, email barely existed and nobody expected a conference website, but nowadays that is mandatory, and so you will need someone willing to set it up and populate it with information about venue and programme. As with my other advice, no need to make it fancy; just ensure there is the basic information that people need, with a link to a place for registration.

There are further items like setting up the Eventbrite page, making conference badges, and ensuring smooth communications with venue, speakers and restaurant. Here the main thing is to delegate responsibility so everyone knows what they have to do. I've quite often experienced the situation where I've agreed to speak at a meeting only to find that nobody has contacted me about the programme or venue and it's only a week to go.

On the day, you'll be glad of assistants who can do things like shepherding people into sessions, taking messages, etc. You can offer free registration to local students in return for them acting in this role.

Publicity

I've listed this under costs, but I've never spent on this for meetings I've organised, and given social media, I don't think you'll need to.

Grants

I've put optional for grants, as you can cover costs without a grant. But every bit of money helps and it's possible that one of the organisers will have funding that can be used. However, my advice is to check out options for grant funding from a society or other funder. National funding bodies such as UK research councils or NIH may have pots of money you can apply for: the sums are typically small and applying for the money is not onerous. Even if a society doesn't have a grants stream for meetings, they may be willing to sponsor places for specific categories of attendees: early-career people or those from resource-poor countries.

Local businesses or publishers are often willing to sponsor things like conference bags, in return for showing their logo. You can often charge publishers for a stand.

Registration

Once you have thought through the items under Expenditure, and have an idea of whether you'll have grant income, you will be in a good position to work out what you need to charge those attending to cover your costs. The ideal is to break even, but it's important not to overspend and so you should estimate how many people are likely to register in each category, and work out a registration fee that will cover this, even if numbers are disappointing.
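
As an illustration of the sort of sums involved, a few lines of R are enough to see what fee you would need to break even on a pessimistic turnout. Every figure below is invented – plug in your own quotes:

    # Back-of-envelope break-even sketch (all numbers made up)
    venue        <- 3000             # lecture theatre, rooms, technical support
    refreshments <- 15 * 2 * 150     # per head per day, 2 days, catering booked for 150
    speakers     <- 4 * 600          # travel plus hotel for 4 plenary speakers
    admin        <- 500              # badges, website, Eventbrite fees
    grants       <- 1000             # sponsorship, if you can get it

    total_cost <- venue + refreshments + speakers + admin
    pessimistic_numbers <- 100       # budget on fewer registrations than you hope for
    (total_cost - grants) / pessimistic_numbers   # 94 per head here; round up for a margin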

What can go wrong?

  • Acts of God. I still remember a meeting at the Royal Society years ago where a hurricane swept across Britain overnight and around 50% of those attending couldn't make it. Other things like strikes, riots, etc. can happen, but I recommend you just accept these are risks not under your control.
  • Clash of dates. This is under your control to some extent. Before you settle on a date, ask around to check there isn't a clash with other meetings or with religious holidays.
  • Speaker pulls out. I have organised meetings where a speaker pulled out at the last minute – there will usually be a good reason for this, such as illness. So long as it is one person, this can be managed, and may indeed provide an opportunity to do something useful with the time, such as holding a mini-hackathon to brainstorm ideas about a specific problem.
  • You make a loss. This is a scary prospect but should not happen with adequate planning, as noted above. Main thing is to make sure you confirm what your speaker expenses will be so you don't get any nasty surprises at the last minute.
  • Difficult people. This is a minor one, but I remember wise words of Betty Byers Brown, a collaborator from those old Manchester days, who told me that 95% of the work of a conference organiser is caused by 5% of those attending. Just knowing that is the case makes it easier to deal with.
  • Unhappy people. People coming from far away who know nobody can have a miserable time at a conference, but with planning, you can help them integrate in a group. Rather than formal entertainment, consider having social activities that ensure everyone is included. Also, have an explicit anti-harassment policy – there are plenty of examples on the web.
  • Criticism. Whatever you do there will be people who complain – why didn't you do X rather than Y?  This can be demoralising if you have put a lot of work into organising something.  Nevertheless, make sure you do ask people for feedback after the meeting: if there are things that could be done better next time, you need to know about them. For what it's worth, the most common complaints I hear after meetings are that speakers go on too long and there is not enough time for questions and discussion. It's important to have firm chairing, and to set up the schedule to encourage interaction.

What can go right?

  • Running a conference carries an element of risk and stress, but it's an opportunity to develop organisational skills, and this can be a great thing to put on your CV. The skills you need to plan a conference are not so different from those needed to budget for a grant: you have to work out how to optimise the use of funds, anticipating expenses and risks.
  • Bonding with co-organisers. If you pick your co-organisers wisely, you may find that the experience of working together to solve problems is enjoyable and you learn a lot.
  • You can choose the topics for your meeting and invite the speakers you most want to hear. As a young researcher organising a small meeting, I got to know the people I'd invited as speakers in a way that would not have been possible had I just been attending a big meeting organised by a major society.
  • You can do it your way. You can decide if you want to lower costs for specific groups. You can make sure that the speakers are diverse, and can experiment with different approaches to get away from the traditional format of speakers delivering a lecture to an audience. For examples see this post and comments below it.
  • The main thing is that if you are in control, you can devise your meeting to ensure it achieves what scientific meetings are supposed to achieve: scholarly communication and interaction that sparks ideas and collaborations. The meetings I organised as an early-career academic remain high points of my career, which is why I am so keen to encourage others to do this.

But! .... Climate emergency

The elephant in this particular room is air travel. Academics are used to zipping around the world to go to conferences, at a time when we increasingly recognise the harm this is doing to our planet. My only justification for writing this post now is that it may encourage people to go to smaller, more focused meetings. But I'm trying to cut down substantially on air travel and, in the longer term, I suspect we will need to move to virtual meetings.

Groups of younger researchers, and those from outside Europe and the UK, have a role to play in working out how to do this. I hope to encourage this by urging people to be bold and to venture outside the big conference arenas, where junior people and those from marginalised groups can feel invisible. Organising a small meeting teaches you many of the skills needed to devise more radical formats. Conferences are going to change, and you should be shaping that future.


Monday, 15 April 2019

Review of 'Innate' by Kevin Mitchell


Innate: How the Wiring of Our Brains Shapes Who We Are.  Kevin J. Mitchell. Princeton, New Jersey, USA: Princeton University Press, 2018, 293 pages, hardcover. ISBN: 978-0-691-17388-7.

This is a preprint of a review written for the Journal of Mind and Behavior.

Most of us are perfectly comfortable hearing about biological bases of differences between species, but studies of biological bases of differences between people can make us uneasy. This can create difficulties for the scientist who wants to do research on the way genes influence neurodevelopment: if we identify genetic variants that account for individual differences in brain function, then it may seem a small step to concluding that some people are inherently more valuable than others. And indeed, in 2018 we saw calls for the use of polygenic risk scores to select embryos for potential educational attainment (Parens et al, 2019). There has also been widespread condemnation of the first attempt to create a genetically modified baby using CRISPR technology (Normile, 2018), with the World Health Organization responding by setting up an advisory committee to develop global standards for governance of human genome editing (World Health Organization, 2019).
Kevin Mitchell's book Innate is essential reading for anyone concerned about the genetics behind these controversies. The author is a superb communicator, who explains complex ideas clearly without sacrificing accuracy. The text is devoid of hype and wishful thinking, and it confronts the ethical dilemmas raised by this research area head-on. I'll come back to those later, but will start by summarising Mitchell's take on where we are in our understanding of genetic influences on neurodevelopment.
Perhaps one of the biggest mistakes that we've made in the past is to teach elementary genetics with an exclusive focus on Mendelian inheritance. Mendel and his peas provided crucial insights into units of inheritance, allowing us to predict precisely the probabilities of different outcomes in offspring of parents through several generations.  The discovery of DNA provided a physical instantiation of the hitherto abstract gene, as well as providing insight into mechanisms of inheritance.  During the first half of the 20th century it became clear that there are human traits and diseases that obey Mendelian laws impeccably: blood groups, Huntington's disease, and cystic fibrosis, to name but a few. The problem is that many intelligent laypeople assume that this is how genetics works in general. If a condition is inherited, then the task is to track down the gene responsible.  And indeed, 40 years ago, many researchers took this view, and set out to track genes for autism, hearing loss, dyslexia and so on.  Ben Goldacre's (2014) comment 'I think you'll find it's a bit more complicated than that' was made in a rather different context, but is a very apt slogan to convey where genetics finds itself in 2019.  Here are some of the key messages that the author conveys, with clarity and concision, which provide essential background to any discussion of ethical implications of research.
1. Genes are not a blueprint
The same DNA does not lead to identical outcomes. We know this from the study of inbred animals, from identical human twins, and even from studying development of the two sides of the body in a single person. How can this be? DNA is a chemically inert material that carries, in its sequence of bases, instructions for how to build a body from proteins. Shouldn't two organisms with identical DNA turn out the same? The answer is no, because DNA can in effect be switched on and off: that is how the same DNA can create a wide variety of different cell types, depending on which proteins are transcribed and when. As Mitchell puts it: "While DNA just kind of sits there, proteins are properly impressive – they do all sorts of things inside cells, acting like tiny molecular machines or robots, carrying out tens of thousands of different functions." DNA is chemically stable, but messenger RNA, which conveys the information to the site in the cell where proteins are produced, is much less so. Individual cells transcribe messenger RNA in bursts. There is variability in this process, which can lead to differences in development.
2. Chance plays an important role in neurodevelopment
Consideration of how RNA functions leads to an important conclusion: factors affecting neurodevelopment can't just be divided into genetic versus environmental influences, because random fluctuations in the transcription process mean that chance also plays a role.
Moving from the neurobiological level, Mitchell notes that the interpretation of twin studies tends to ignore the role of chance. When identical (monozygotic or MZ) twins grow up differently, this is often attributed to the effects of 'non-shared environment', implying there may have been some systematic differences in their experiences, either pre- or post-natal, that led them to differ. But such effects don't need to be invoked to explain why identical twins can differ: differences can arise because of random effects operating at a very early stage of neurodevelopment.
3. Small initial differences can lead to large variation in outcome
If chance is one factor overlooked in many accounts of genetics, development is the other. There are interactions between proteins, such that when messenger RNA from gene A reaches a certain level, this will increase expression of genes B and C.  Those genes in turn can affect others in a cascading sequence. This mechanism can amplify small initial differences to create much larger effects.
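As a toy illustration of this amplification (my own sketch, not a model from the book, and not biologically calibrated): if each gene in a cascade multiplies the expression level of the next, a small random fluctuation at the start grows at every step.

    import random

    # Toy sketch of amplification in a gene-expression cascade.
    # Nothing here is biologically calibrated; it simply shows how a small
    # initial difference grows when each step multiplies the previous one.

    def cascade(initial_level, steps=6, gain=2.0):
        """Each downstream gene's expression is proportional to the one upstream."""
        level = initial_level
        for _ in range(steps):
            level *= gain
        return level

    random.seed(1)
    a = 1.00                            # expression level in one embryo
    b = 1.00 + random.gauss(0, 0.02)    # the same, plus ~2% transcriptional noise

    print(f"initial difference: {abs(a - b):.4f}")
    print(f"difference after cascade: {abs(cascade(a) - cascade(b)):.4f}")

With a gain of 2 over six steps, the final difference is 64 times the initial one – the same logic by which trivial early fluctuations can end up mattering.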
4. Genetic is not the same as heritable
Genetic variants that influence neurodevelopment can be transmitted in the DNA passed from parent to child leading to heritable disorders and traits.  But many genetically-based neurodevelopmental disorders do not work like this; rather, they are caused by 'de novo' mutations, i.e. changes to DNA that arise early in embryogenesis, and so are not shared with either parent.
5. We all have many mutations
The notion that there is a clear divide between 'normal people' with a nice pure genome and 'disordered' people with mutations is a fiction. All of us have numerous copy number variants (CNVs), chunks of DNA that are deleted or duplicated (Beckmann, Estivill, & Antonarakis, 2007), as well as point mutations – i.e. changes in a single base pair of DNA. When the scale of mutation in 'normal' people was first discovered, it came as quite a shock to the genetics community, throwing a spanner in the works for researchers trying to uncover causes of specific conditions. If we find a rare CNV or point mutation in a person with a disorder, it could just be coincidence and not play any causal role. Converging evidence is needed. Studies of gene function can help establish causality; the impact on brain development will depend on whether a mutation affects key aspects of protein synthesis; but even so, there have been cases where a mutation thought to play a key role in disorder then pops up in someone whose development is entirely unremarkable. A cautionary tale is offered by Toma et al (2018), who studied variants in CNTNAP2, a gene that was thought to be related to autism and schizophrenia. They found that the burden of rare variants that disrupted gene function was just as high in individuals from the general population as in people with autism or schizophrenia.
6. One gene – one disorder is the exception rather than the rule
For many neurodevelopmental conditions, e.g. autism, intellectual disability, and epilepsy, associated mutations have been tracked down. But most of them account for only a small proportion of affected individuals, and furthermore, the same mutation is typically associated with different disorders.  Our diagnostic categories don't map well onto the genes.
This message is of particular interest to me, as I have been studying the impact of a major genetic change – presence of an extra X or Y chromosome – on children's development: this includes girls with an additional X chromosome (trisomy X), boys with an extra X (XXY or Klinefelter's syndrome) and boys with an extra Y (XYY constitution). The impact of an extra sex chromosome is far less than you might expect: most of these children attend mainstream school and live independently as adults. There has been much speculation about possible contrasting effects of an extra X versus an extra Y chromosome. However, in general, one finds that variation within a particular trisomy group is far greater than variation between them. So, with all three types of trisomy, there is an increased likelihood that the child will have educational difficulties, language and attentional problems, and there's also a risk of social anxiety. In a minority of cases the child meets criteria for autism or intellectual disability (Wilson, King & Bishop, 2019). The range of outcomes is substantial – something that makes it difficult to advise parents when the trisomy is discovered. The story is similar for some other mutations: there are cases where a particular gene is described as an 'autism gene', only for later studies to find that individuals with the same mutation may have attention deficit hyperactivity disorder, epilepsy, language disorder, intellectual disability – or indeed, no diagnosis at all. For instance, Niarchou et al (2019) published a study of a sample of children with a deletion or duplication at a site on chromosome 16 (16p11.2), predicting that the deletion would be associated with autism, and the duplication with autism or schizophrenia. In fact, they found that the commonest diagnosis with both the deletion and the duplication was attention deficit hyperactivity disorder, though rates of intellectual disability and autism were also increased. 52% of the cases with a deletion and 37% of those with a duplication had no psychiatric diagnosis.
There are several ways in which such variation in outcomes might arise. First, the impact of a particular mutation may depend on the genetic background – for instance, if the person has another mutation affecting the same neural circuits, this 'double hit' may have a severe impact, whereas either mutation alone would be innocuous. A second possibility is that there may be environmental factors that affect outcomes. There is a lot of interest in this idea because it opens up potential for interventions. The third option, though, is the one that is often overlooked: the possibility that differences in outcomes are the consequence of random factors early in neurodevelopment, which then have cascading effects that amplify initial minor differences (see points 2 and 3).
7. A mutation may create general developmental instability
Many geneticists think of effects of mutations in terms of the functional impact on particular developmental processes. In the case of neurodevelopment, there is interest in how genes affect processes such as neuronal migration (movement of cells to their final position in the brain), synaptic connectivity (affecting communication between cells) or myelination (formation of white matter sheaths around nerve fibres). Mitchell suggests, however, that mutations may have more general effects, simply making the brain less able to adapt to disruptive processes in development. Many of us learn about genetics in the context of conditions like Huntington's disease, where a specific mutation leads to a recognisable syndrome. However, for many neurodevelopmental conditions, the impact of a mutation is to increase the variation in outcomes. This makes sense of the observations outlined in point 6: a mutation can be associated with a range of developmental disabilities, but with different conditions in different people.
8. Sex differences in risk for neurodevelopmental disorders have genetic origins
There has been so much exaggeration and bad science in research on sex differences in the brain that it has become popular either to deny their existence or to attribute them to differences in the environmental experiences of males and females. Mitchell has no time for such arguments. There is ample evidence from animal studies that both genes and hormones affect neurodevelopment: why should humans be any different? But he adds two riders: first, although systematic sex differences can be found in human brains, they are small enough to be swamped by individual variation within each sex. So if you want to know about the brain of an individual, their sex would not tell you very much. And second, different does not mean inferior.
Mitchell argues that brain development is more variable in males than in females, and he cites evidence that, while average ability scores are similar for males and females, males show more variation and are over-represented at the extremes of distributions of ability. The over-representation at the lower end has been recognised for many years and is at least partly explicable in terms of how the sex chromosomes operate. Many syndromes of intellectual disability are X-linked, which means they are caused by a mutation of large effect on the X chromosome. The mother of an affected boy often carries the same mutation but shows no impairment: this is because she has two X chromosomes, and the effect of a mutation on one of them is compensated for by the unaffected chromosome. The boy has an XY chromosome constitution, with the Y being a small chromosome with few genes on it, and so the full impact of an X-linked mutation will be seen. Having said that, many conditions with a male preponderance, such as autism and developmental language disorder, do not appear to involve X-linked genes, and some disorders, such as depression, are more common in females, so there is still much we need to explain. Mitchell's point is that we won't make progress in doing so by denying a role for sex chromosomes or hormones in neurodevelopment.
Mitchell moves into much more controversial territory in describing studies showing over-representation of males at the other end of the ability distribution: e.g. in people with extraordinary skills in mathematics. That is much harder to explain in terms of his own account of genetic mechanisms, which questions the existence of genetic variants associated with high ability. I have not followed that literature closely enough to know how solid the evidence of male over-representation is, but assuming it is reliable, I'd like to see studies that looked more broadly at other aspects of cognition in males who had spectacular ability in domains such as maths or chess. The question is how to reconcile such findings with Mitchell's position – which he summarises rather bluntly by saying there are no genes for intelligence, only genes for stupidity. He does suggest that greater developmental instability in males might lead to some cases of extremely high functioning, but that sits uneasily with his view that instability generally leads to deficits, not strengths. I'd be interested in studies of these exceptional high achievers that look at their skills across a wider range of domains. Is it really the case that males at the very top end of the IQ distribution are uniformly good at everything, or are there compensating deficits? It's easy to think of anecdotal examples of geniuses who were lacking in what we might term social intelligence, and whose ability to flourish was limited to a very restricted ecological niche in the groves of academe. Maybe these are people whose specific focus on certain topics would have been detrimental to reproductive fitness in our ancestors, but who can thrive in modern society where people are able to pursue exceptionally narrow interests. If so, we can predict that, at the point in the distribution where exceptional ability shows a strong male bias, the skill should be highly specific and accompanied by limitations in other domains of cognition or behaviour.
9. It is difficult to distinguish polygenic effects from genetic heterogeneity
Way back in the early 1900s, there was criticism of Mendelian genetics because it maintained that genetic material was transmitted in quanta, and so it seemed unable to explain inheritance of continuous traits such as height, where the child's phenotype may be intermediate between those of the parents. Reconciliation of these positions was achieved by Ronald Fisher, who showed that if a phenotype was influenced by the combined impact of many genes of small effect, we would expect correlations between related individuals in continuous traits. This polygenic view of inheritance is thought to apply to many common traits and disorders. If so, then the best way to discover genetic bases for disorder is not to hunt through the genome looking for rare mutations, but rather to search for common variants of small effect. The problem is that, on the one hand, this requires enormous samples to identify tiny effects, and on the other, it is easy to find false-positive associations. The method of genome-wide association has been developed to address these issues, and has had some success in identifying genetic variants that have little effect in isolation, but which in aggregate play a role in causing disorder.
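For readers who like to see the idea in symbols, a minimal sketch of the standard additive ('infinitesimal') model that grew out of Fisher's insight (my notation, not the book's) is:

    P = \mu + \sum_{i=1}^{n} a_i x_i + E, \qquad x_i \in \{0, 1, 2\},

where x_i counts copies of the trait-increasing allele at locus i, each effect a_i is tiny, and E is an environmental deviation. Under this model the expected covariance between relatives scales with their genetic relatedness – for example, Cov(parent, offspring) = (1/2)V_A, where V_A is the additive genetic variance – which is how a trait influenced by many small-effect variants can still run in families and vary continuously.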
Mitchell, however, has a rather different approach. At a time when most geneticists were embracing the idea that conditions such as schizophrenia and autism were the result of the combined effect of numerous common genetic variants, each of tiny influence, Mitchell (2012) argued for another possibility – that we may be dealing with rare variants of large effect, which differ from family to family. In Innate, he suggests it is a mistake to reduce this to an either/or question: a person's polygenic background may establish a degree of risk for disorder, with specific mutations then determining how far that risk is manifest.
This is not just an academic debate: it has implications for how we invest in science, and for clinical applications of genetics. Genome-wide association studies need enormous samples, and the collection, analysis and storage of data are expensive. There have been repeated criticisms that the yield of positive findings has been low and that these studies have not given good value for money. In particular, it's been noted that the effects of individual genetic variants are minuscule, can only be detected in enormous samples, and throw little light on underlying mechanisms (Turkheimer, 2012, 2016). This has led to a sense of gloom that this line of work is unlikely to provide explanations of disorder or improvements in treatment.
An approach that is currently in vogue is to derive a Polygenic Risk Score, which is based on all the genetic variants associated with a condition, weighted by the strength of association. This can give some probabilistic information about likelihood of a specific phenotype, but for cognitive and behavioural phenotypes, the level of prediction is not impressive.  The more data is obtained on enormous samples, the better the prediction becomes, and some scientists predict that Polygenic Risk Scores will become accurate enough to be used in personalised medicine or psychology. Others, though, have serious doubts.  A thoughtful account of the pros and cons of Polygenic Risk Scores is found in an interview that Ed Yong (2018) had with Daniel Benjamin, one of the authors of a recent study reporting on Polygenic Risk Scores for educational attainment (Lee et al, 2018). Benjamin suggested that predicting educational attainment from genes is a non-starter, because prediction for individuals is very weak. But he suggested that the research has value as we can use a Polygenic Risk Score as a covariate to control for genetic variation when studying the impact of environmental interventions. However, this depends on results generalising to other samples. It is noteworthy that when the Polygenic Risk Score for educational attainment was tested for its ability to explain within-family variation (in siblings), its predictive power dropped (Lee et al, 2018).
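To make 'weighted by the strength of association' concrete, a polygenic score is essentially a weighted sum of allele counts. A minimal sketch in Python follows; the SNP names, effect sizes and genotype are all invented for illustration, not taken from any real GWAS:

    # Minimal sketch of a polygenic score: a weighted sum of allele counts.
    # Effect sizes and genotypes are invented, not from any real study.

    # GWAS effect-size estimates per copy of the effect allele (hypothetical SNPs)
    effect_sizes = {"rs0001": 0.020, "rs0002": -0.015, "rs0003": 0.008}

    # One person's genotype: number of effect-allele copies (0, 1 or 2) at each SNP
    genotype = {"rs0001": 2, "rs0002": 0, "rs0003": 1}

    polygenic_score = sum(effect_sizes[snp] * genotype[snp] for snp in effect_sizes)
    print(f"Polygenic score: {polygenic_score:.3f}")

    # Real scores sum over hundreds of thousands of SNPs, and even then explain
    # only a modest fraction of variance for traits such as educational attainment.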
It is often argued that knowledge of the genetic variants contributing to a Polygenic Risk Score will help identify the functions controlled by the relevant genes, which may lead to new discoveries in developmental neurobiology and drug design. However, others question whether Polygenic Risk Scores have the necessary biological specificity to fulfil this promise (Reimers et al, 2018). Furthermore, recent papers have raised concerns that population stratification means that Polygenic Risk Scores may give misleading results: for instance, we might be able to find a group of SNPs predictive of 'chopstick-eating skills', but this would just be based on genetic variants that happen to differ between ethnic groups that do and don't eat with chopsticks (Barton et al, 2019).
I think Mitchell would in any case regard the quest for Polygenic Risk Scores as a distraction from other, more promising approaches that focus on finding rare variants of big effect. Rather than investing in analyses that require huge amounts of big data to detect marginal associations between phenotypes and SNPs, his view is that we will make most progress by studying the consequences of mutations. The tussle between these viewpoints is reflected in two recent articles. Boyle, Li, and Pritchard (2017) queried some of the assumptions behind genome-wide association studies, and suggested that most progress will occur if we focus on detecting rare variants that may help us understand the biological pathways involved in disorder. Wray et al (2018) countered by arguing that while exploring for de novo mutations is important for understanding severe childhood disorders, this approach is unlikely to be cost-effective when dealing with common diseases, where genome-wide association with enormous samples is the optimal strategy. In fact, the positions of these authors are not diametrically opposed: it is rather a question of which approach should be given most resources. The discussion involves more than just scientific disagreement: reputations and large amounts of research funding are at stake.
Ethical implications
And so we come to the ethical issues around modern genetics. I hope I have at least convinced readers that, in order to have a rational analysis of moral questions in this field, one needs to move away from simplistic ideas of the genome as some kind of blueprint that determines brain structure and function. Ethical issues that are hard enough when outcomes are deterministic gain a whole new layer of complexity once we realise that chance makes a large contribution to most relationships between genes and neurodevelopment.
But let's start with the simpler case, where you can reliably predict how a person will turn out from knowledge of their genetic constitution. There are then two problematic issues to grapple with: 1) if you have knowledge of genetic constitution prenatally, under what circumstances would you consider using the information to select an embryo or terminate a pregnancy? 2) if a person with a genetically-determined condition exists, should they be treated differently on the basis of that condition?
Some religions bypass the first question altogether, by arguing that it is never acceptable to terminate a pregnancy. But, if we put absolutist positions to one side, I suspect most people would give a range of answers to question 1, depending on what the impact of the genetic condition is:  termination may be judged acceptable or even desirable if there are such severe impacts on the developing brain that the infant would be unlikely to survive into childhood, be in a great deal of distress or pain, or be severely mentally impaired. At the other extreme, terminating a pregnancy because a person lacks a Y chromosome seems highly unethical to many people, yet this practice is legal in some countries, and widely adopted even when it is not (Hvistendahl, 2011). These polarised scenarios may seem relatively straightforward, but there are numerous challenges because there will always be cases that fall between these extremes.
It is impossible to ignore the role of social factors in our judgements. Many hearing people are shocked when they discover that some Deaf parents want to use reproductive technologies to select for Deafness in their child (Mand et al., 2009), but those who wish to adopt such a practice argue that Deafness is a cultural difference rather than a disability.
Now let's add chance into the mix. Suppose there is a genetic condition that makes it more likely that a child will have learning difficulties or behaviour problems, but the range of outcomes is substantial: the typical outcome is mild educational difficulties, and many children do perfectly well. This is exactly the dilemma facing parents of children who are found on prenatal screening to have an extra X or Y chromosome. In many countries parents may be offered a termination of pregnancy in such cases, but it is clear that whether or not they decide to continue with the pregnancy depends on what they are told about potential outcomes (Jeon, Chen, & Goodson, 2012).
Like Kevin Mitchell, I don't have easy solutions to such dilemmas, but like him, I think that we need to anticipate that such thorny ethical questions are likely to increase as our knowledge of genetics expands – with many if not most genetic influences being probabilistic rather than deterministic. The science fiction film Gattaca portrays a chilling vision of a world where genetic testing at birth is used to identify elite individuals who will have the opportunity to be astronauts, leaving those with less optimal alleles to do menial work – even though prediction is only probabilistic, and those with 'invalid' genomes may have desirable traits that were not screened for. The Gattaca vision is bleak not just because of the evident unfairness of using genetic screening to allocate resources to people, but because a world inhabited by a set of clones, selected for perfection on a handful of traits, could wipe out the diversity that makes us such a successful species.
There's another whole set of ethical issues to do with how we treat people who are known to have genetic differences. Suppose we find that someone standing trial has a genetic mutation that is known to be associated with aggressive outbursts. Should this genetic information be used in mitigation for criminal behaviour? Some might say this would be tantamount to letting a criminal get away with antisocial behaviour, whereas others may regard it as unethical to withhold this information from the court. The problem, again, becomes particularly thorny because the association between genetic variation and aggression is always probabilistic. Is someone with a genetic variant that confers a 50% increase in risk of aggression less guilty than someone with a different variant that makes them 50% less likely to be aggressive? Of course, it could be argued that the most reliable genetic predictor of criminality is having a Y chromosome, but we do not therefore treat male criminals more leniently than females. Rather, we recognise that genetic constitution is but one aspect of an individual's make-up, and that the factors that lead a person to commit a crime go far beyond their DNA sequence.
As we gain ever more knowledge of genetics, the ethical challenges raised by our ability to detect and manipulate genetic variation need to be confronted. To do that we need an up-to-date and nuanced understanding of the ways in which genes influence neurodevelopment and ultimately affect behaviour. Innate provides exactly that.
Acknowledgement
I thank David Didau for comments on a draft version of this review, and in particular for introducing me to Gattaca.
References
Barton, N., Hermisson, J., & Nordborg, M. (2019). Population genetics: Why structure matters. eLife, 8, e45380. doi:10.7554/eLife.45380
Beckmann, J. S., Estivill, X., & Antonarakis, S. E. (2007). Copy number variants and genetic traits: closer to the resolution of phenotypic to genotypic variability. Nature Reviews Genetics, 8(8), 639-646.
Boyle, E. A., Li, Y. I., & Pritchard, J. K. (2017). An expanded view of complex traits: From polygenic to omnigenic. Cell, 169(7), 1177-1186.
Goldacre, B. (2014). I think you'll find it's a bit more complicated than that. London, UK: Harper Collins.
Hvistendahl, M. (2011). Unnatural Selection: Choosing Boys Over Girls, and the Consequences of a World Full of Men. New York: Public Affairs.
Jeon, K. C., Chen, L.-S., & Goodson, P. (2012). Decision to abort after a prenatal diagnosis of sex chromosome abnormality: a systematic review of the literature. Genetics in Medicine, 14, 27-38.
Mand, C., Duncan, R. E., Gillam, L., Collins, V., & Delatycki, M. B. (2009). Genetic selection for deafness: the views of hearing children of deaf adults. Journal of Medical Ethics, 35(12), 722-728. doi:10.1136/jme.2009.030429
Mitchell, K. J. (2012). What is complex about complex disorders? Genome Biology, 13, 237.
Niarchou, M., Chawner, S. J. R. A., Doherty, J. L., Maillard, A. M., Jacquemont, S., Chung, W. K., . . . van den Bree, M. B. M. (2019). Psychiatric disorders in children with 16p11.2 deletion and duplication. Translational Psychiatry, 9(8). doi:10.1038/s41398-018-0339-8
Normile, D. (2018). Shock greets claim of CRISPR-edited babies. Science, 362(6418), 978-979. doi:10.1126/science.362.6418.978
Parens, E., Appelbaum, P., & Chung, W. (2019). Embryo editing for higher IQ is a fantasy. Embryo profiling for it is almost here. Stat+ (Feb 12, 2019).
Reimers, M. A., Craver, C., Dozmorov, M., Bacanu, S. A., & Kendler, K. S. (2018). The coherence problem: Finding meaning in GWAS complexity. Behavior Genetics. doi:10.1007/s10519-018-9935-x
Toma, C., Pierce, K. D., Shaw, A. D., Heath, A., Mitchell, P. B., Schofield, P. R., & Fullerton, J. M. (2018). Comprehensive cross-disorder analyses of CNTNAP2 suggest it is unlikely to be a primary risk gene for psychiatric disorders. bioRxiv. doi:10.1101/363846
Turkheimer, E. (2012). Genome Wide Association Studies of behavior are social science. In K. S. Plaisance & T. A. C. Reydon (Eds.), Philosophy of Behavioral Biology (Boston Studies in the Philosophy of Science, Vol. 282, pp. 43-64). Springer Science+Business Media. doi:10.1007/978-94-007-1951-4_3
Turkheimer, E. (2016). Weak genetic explanation 20 years later: Reply to Plomin et al (2016). Perspectives on Psychological Science, 11(1), 24-28. doi:10.1177/1745691615617442
World Health Organization (2019). WHO establishing expert panel to develop global standards for governance and oversight of human genome editing. https://www.who.int/ethics/topics/human-genome-editing/en/.
Wray, N. R., Wijmenga, C., Sullivan, P. F., Yang, J., & Visscher, P. M. (2018). Common disease is more complex than implied by the core gene omnigenic model. Cell, 173, 1573-1590. doi:10.1016/j.cell.2018.05.051
Yong, E. (2018). An enormous study of the genes related to staying in school. The Atlantic. https://www.theatlantic.com/science/archive/2018/07/staying-in-school.../565832/