Saturday, 10 August 2019

A day out at 10 Downing Street

Yesterday, I attended a meeting at 10 Downing Street with Dominic Cummings, special advisor to Boris Johnson, for a discussion about science funding. I suspect my invitation will be regarded, in hindsight, as a mistake, and I hope some hapless civil servant does not get into trouble over it. I discovered that I was on the invitation list because of a recommendation by the eminent mathematician Tim Gowers, whom Cummings venerates. Tim wasn't able to attend the meeting, but apparently he is a fan of my blog, and we have bonded over a shared dislike of the evil empire of Elsevier. I had heard that Cummings liked bold, new ideas, and I thought that I might be able to contribute something, given that science funding is something I have blogged about.

The invitation came on Tuesday and, having confirmed that it was not a spoof, I spent some time reading Cummings' blog, to get a better idea of where he was coming from. My impression was that he is besotted with science, especially maths and technology, and impatient with bureaucracy. That seemed promising common ground.

The problem, though, is that as a major facilitator of Brexit in 2016, who is now persisting with the idea that Brexit must be achieved "at any cost", he is doing immense damage, because science transcends national boundaries. Don't just take my word for it: it's a message that has been stressed by the President of the Royal Society, the Government's Chief Scientific Advisor, the Chair of the Wellcome Trust, the President of the Academy of Medical Sciences, and the Director of the Crick Institute, among others. 

The day before the meeting, I received an email to say that the topic of discussion would be much narrower than I had been led to believe. The other invitees were four Professors of Mathematics and the Director of the Engineering and Physical Sciences Research Council. We were sent a discussion document written by one of the professors outlining a wish list for improvements in funding for academic mathematics in the UK. I wasn't sure if I was a token woman: I suspect Cummings doesn't go in for token women and that my invite was simply because it had been assumed that someone recommended by Gowers would be a mathematician. I should add that my comments here are in a personal capacity and my views should not be taken as representing those of the University of Oxford.

The meeting started, rather as expected, with Cummings saying that we would not be talking about Brexit, because "everyone has different views about Brexit" and it would not be helpful. My suspicion was that everyone around the table other than Cummings had very similar views about Brexit, but I could see that we'd not get anywhere arguing the point. So we started off feeling rather like a patient who visits a doctor for medical advice, only to be told "I know I just cut off your leg, but let's not mention that."

The meeting proceeded in a cordial fashion, with Cummings expressing his strong desire to foster mathematics in British universities, and asking the mathematicians to come up with their "dream scenario" for dramatically enhancing the international standing of their discipline over the next few years. As one might expect, more funding for researchers at all levels, longer duration of funding, plus less bureaucracy around applying for funding were the basic themes, though Brexit-related issues did keep leaking into the conversation – everyone was concerned about difficulties of attracting and retaining overseas talent, and about loss of international collaborations funded by the European Research Council. Cummings was clearly proud of the announcement on Thursday evening about easing of visa restrictions on overseas scientists, which has the potential to go some way towards mitigating some of the problems created by Brexit. I felt, however, that he did not grasp the extent to which scientific research is an international activity, and breakthroughs depend on teams with complementary skills and perspectives, rather than the occasional "lone genius". It's not just about attracting "the very best minds from around the world" to come and work here.

Overall, I found the meeting frustrating. First, I felt that Cummings was aware that there was a conflict between his twin aims of pursuit of Brexit and promotion of science, but he seemed to think this could be fixed by increasing funding and cutting regulation. I also wonder where on earth the money is coming from. Cummings made it clear that any proposals would need Treasury approval, but he encouraged the mathematicians to be ambitious, and talked as if anything was possible. In a week when we learned that the economy is shrinking for the first time in years, it's hard to believe he has found the forest of magic money trees needed to cover recent spending announcements, let alone additional funding for maths.

Second, given Cummings' reputation, I had expected a far more wide-ranging discussion of different funding approaches. I fully support increased funding for fundamental mathematics, and did not want to cut across that discussion, so I didn't say much. I had, however, expected a bit more evidence of creativity. In his blog, Cummings refers to the Defense Advanced Research Projects Agency (DARPA), which is widely admired as a model for how to foster innovation. DARPA was set up in 1958 with the goal of giving the US superiority in military and other technologies. It combined blue-skies and problem-oriented research, and was immensely successful, leading to the development of the internet, among other things. In his preamble, Cummings briefly mentioned DARPA as a useful model. Yet, our discussion was entirely about capacity-building within existing structures.

Third, no mention was made of problem-oriented funding. Many scientists dislike having governments control what they work on, and indeed, blue-skies research often generates quite unexpected and beneficial outcomes. But we are in a world with urgent problems that would benefit from the focussed attention of an interdisciplinary, and dare I say it, international group of talented scientists. In the past, it has taken world wars to force scientists to band together to find solutions to immediate threats. The rapid changes in the Arctic suggest that the climate emergency should be treated just like a war - a challenge to be tackled without delay. We should be deploying scientists, including mathematicians, to explore every avenue to mitigating the effects of global heating – physical and social – right now. Although there is interesting research on solar geoengineering going on at Harvard, it is clear that, under the Trump administration, we aren't going to see serious investment from the USA in tackling global heating. And, in any case, a global problem as complex as climate needs a multi-pronged solution. The economist Mariana Mazzucato understands this: her proposals for mission-oriented research take a different approach to the conventional funding agencies we have in the UK. Yet when I asked whether climate research was a priority in his planning, Cummings replied "it's not up to me". He said that there were lots of people pushing for more funding for research on "climate change or whatever", but he gave the impression that it was not something he would give priority to, and he did not display a sense of urgency. That's surprising in someone who is scientifically literate and has a child.

In sum, it's great that we have a special advisor who is committed to science. I'm very happy to see mathematics as a priority funding area. But I fear Dominic Cummings overestimates the extent to which he can mitigate the negative consequences of Brexit, and it is particularly unfortunate that his priorities do not include the climate emergency that is unfolding.

Saturday, 3 August 2019

Corrigendum: a word you may hope never to encounter


I have this week submitted a 'corrigendum' for an article published in the American Journal of Medical Genetics B (Bishop et al., 2006). It's just a fancy word for 'correction', and journals use it contrastively with 'erratum'. Basically, if the journal messes up and prints something wrong, it's an erratum. If the author is responsible for the mistake, it's a corrigendum.

I'm trying to remember how many corrigenda I've written over the 40-odd years I've been publishing: there have been at least three previous cases that I can remember, but there could be more. I think this one was the worst; previous errors have tended to affect only a few numbers in a minor way. In this case, a whole table of numbers (Table II) was thrown out, and although the main findings were upheld, there were some changes in the details.

I discovered the error when someone asked for the data for a meta-analysis. I was initially worried I would not be able to find the files, but fortunately, I had archived the dataset on a server, and eventually tracked it down. But it was not well-documented, and I then had the task of trawling through a number of cryptically-named files to try and work out which one was the basis for the data in the paper. My brain slowly reconstructed what the variable names meant and I got to the point of thinking I'd better check that this was the correct dataset by rerunning the analysis. Alas, although I could recreate most of what was published, I had the chilling realisation that there was a problem with Table II.

Table II was the one place in the analysis where, in trying to avoid one problem with the data (non-independence), I created a whole new problem (wrong numbers). I had data on siblings of children with autism, and in some cases there were two or three siblings in the family. These days I would have considered using a multilevel model to take family structure into account, but in 2005 I didn't know how to do that, and instead I decided to take a mean value for each family. So if there was one child, I used their score, but if there were two or three, then I averaged them. The N was then the number of families, not the number of children.

And here, dear Reader, is where I made a fatal mistake. I thought the simplest way to do this would be by creating a new column in my Excel spreadsheet which had the mean for each family, computing this by manually entering a formula based on the row numbers for the siblings in that family. The number of families was small enough for this to be feasible, and all seemed well. However, I noticed when I opened the file that I had pasted a comment in red on the top row that said 'DO NOT SORT THIS FILE!'. Clearly, I had already run into problems with my method, which would be totally messed up if the rows were reordered. Despite my warning message to myself, somewhere along the line, it seems that a change was made to the numbering, and this meant that a few children had been assigned to the wrong family. And that's why Table II had gremlins in it and needed correcting.

I now know that doing computations in Excel is almost always a bad idea, but in those days, I was innocent enough to be impressed with its computational possibilities. Now I use R, and life is transformed. The problem of computing a mean for each family can be scripted pretty easily, and then you have a lasting record of the analysis, which can be reproduced at any time. In my current projects, I aim to store data with a data dictionary and scripts on a repository such as Open Science Framework, with a link in the paper, so anyone can reconstruct the analysis, and I can find it easily if someone asks for the data. I wish I had learned about this years ago, but at least I can now use this approach with any new data – and I also aim to archive some old datasets as well.
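The family-averaging step that went wrong in Excel is exactly the kind of computation that scripting makes robust: you group by a family identifier rather than by row position, so the result survives any re-sorting of the data. Here is a minimal sketch in Python (I use R these days, but the point is the same in any scripting language; the family IDs and scores below are invented for illustration):

```python
# Per-family mean scores, keyed on a family ID rather than row numbers,
# so sorting or reordering the rows cannot corrupt the result.
# Family IDs and scores are invented for illustration.
from collections import defaultdict

records = [
    ("fam01", 52.0),   # one child in this family
    ("fam02", 48.0),   # two siblings in this family
    ("fam02", 56.0),
    ("fam03", 61.0),
]

def family_means(rows):
    """Return {family_id: mean score}; len() of the result is the family-level N."""
    totals = defaultdict(lambda: [0.0, 0])   # family_id -> [sum, count]
    for fam, score in rows:
        totals[fam][0] += score
        totals[fam][1] += 1
    return {fam: s / n for fam, (s, n) in totals.items()}

means = family_means(records)
# means["fam02"] is (48.0 + 56.0) / 2 = 52.0, and len(means) is 3 families
```

Because the grouping key travels with each row, shuffling or sorting the input leaves the output unchanged, which is precisely the guarantee my 'DO NOT SORT THIS FILE!' warning could not provide.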

For a journal, a corrigendum is a nuisance: corrections cost time and money in production, and are usually pretty hard to link up to the original article, so it may all be seen as a bit pointless. This is especially so given that a corrigendum is only appropriate if the error is not major. If an error would alter the conclusions that you'd draw from the data, then the paper will need to be retracted. Nevertheless, it is important for the scientific record to be accurate, and I'm pleased to say that the American Journal of Medical Genetics took this seriously. They responded promptly to my email documenting the problem, suggesting I write a corrigendum, which I have now done.

I thought it worth blogging about this to show how much easier my life would have been if I had been using the practices of data management and analysis that I am now starting to adopt. I also felt it does no harm to write about making mistakes, which is usually a taboo subject. I've argued previously that we should be open about errors, to encourage others to report them, and to demonstrate how everyone makes mistakes, even when trying hard to be accurate (Bishop, 2018). So yes, mistakes happen, but you do learn from them.

References 
Bishop, D. V. M. (2018). Fallibility in science: Responding to errors in the work of oneself and others (Commentary). Advances in Methods and Practices in Psychological Science, 1(3), 432-438. doi:10.1177/2515245918776632. (For free preprint see: https://peerj.com/preprints/3486/)

Bishop, D. V. M., Maybery, M., Wong, D., Maley, A., & Hallmayer, J. (2006). Characteristics of the broader phenotype in autism: a study of siblings using the Children's Communication Checklist - 2. American Journal of Medical Genetics Part B (Neuropsychiatric Genetics), 141B, 117-122.

Saturday, 20 July 2019

A call for funders to ban institutions that use grant capture targets

I caused unease on Twitter this week when I criticised a piece in the Times Higher Education on 'How to win a research grant'. As I explained in a series of tweets, I have no objection to experienced grant-holders sharing their pearls of wisdom with other academics: indeed, I've given my own tips in the past. My objection was to the sentiment behind the lede beneath the headline: "Even in disciplines in which research is inherently inexpensive, ‘grant capture’ is increasingly being adopted as a metric to judge academics and universities. But with success rates typically little better than one in five, rejection is the fate of most applications." I made the observation that it might have been better if the Times Higher had noted that grant capture is a stupid way to evaluate academics.

Science is in trouble when the getting of grant funding is seen as an end in itself rather than a means to the end of doing good research, with researchers rewarded in proportion to how much money they bring in. I've rehearsed the arguments for this view more than once on my blog (see, e.g. here); many of these points were anticipated by Raphael Gillett in 1991, long before 'grant capture' became widespread as an explicit management tool. Although my view is shared by some other senior figures (see, e.g., this piece by John Ioannidis), it is seldom voiced. When I suggested that the best approach to seeking funding was to wait until you had a great idea that you were itching to implement, the patience of my followers snapped. It was clear that to many people working in academia, this view is seen as naive and unrealistic. Quite simply, it's a case of get funded or get fired. When I started out, funding success may have been used informally to rate academics, but now it is often explicit, sometimes to the point whereby expected grant income targets are specified.

Encouraging more and more grant submissions is toxic, both for researchers and for science, but everyone feels trapped. So how could we escape from this fix?

I think the solution has to be down to funders. They should be motivated to tackle the problem for several reasons.
  • First, they are inundated with far more proposals than they can fund - to the extent that many of them use methods of "demand management" to stem the tide. 
  • Second, if people are pressurised into coming up with research projects in order to become or remain employed, this is not likely to lead to particularly good research. We might expect quality of proposals to improve if people are encouraged to take time to develop and hone a great idea.
  • Third, although peer review of grants is generally thought to be the best among various unsatisfactory options for selecting grants, it is known to have poor reliability, and there is an element of lottery as to who gets funded. There's a real risk that, with grant capture being used as a metric, many researchers are being lost from the system because they were unlucky rather than untalented. 
  • Fourth, if people are evaluated in terms of the amount of funding they acquire, they will be motivated to make their proposals as expensive as possible: this cannot be in the interests of the funders.
Funders have considerable power in their hands and they can use it to change the culture. This was neatly demonstrated when the Athena SWAN charter started up, originally focused on improving gender equality in STEMM subjects. Institutions paid lip service to it, but there was little action until the Chief Medical Officer, Sally Davies, declared that to be eligible for biomedical funding from NIHR, institutions would have to have a Silver Athena SWAN award.  This raising of the stakes concentrated the minds of Vice Chancellors to an impressive degree.

My suggestion is that major funders such as Research England, the Wellcome Trust and Cancer Research UK could at a stroke improve research culture in the UK by implementing a rule whereby any institution that used grant capture as a criterion for hiring, firing or promotion would be ineligible to host grants.

Reference
Gillett, R. (1991). Pitfalls in assessing research performance by grant income. Scientometrics, 22(2), 253-263.

Wednesday, 12 June 2019

Bishopblog catalogue (updated 12 June 2019)

Source: http://www.weblogcartoons.com/2008/11/23/ideas/

Those of you who follow this blog may have noticed a lack of thematic coherence. I write about whatever is exercising my mind at the time, which can range from technical aspects of statistics to the design of bathroom taps. I decided it might be helpful to introduce a bit of order into this chaotic melange, so here is a catalogue of posts by topic.

Language impairment, dyslexia and related disorders
The common childhood disorders that have been left out in the cold (1 Dec 2010) What's in a name? (18 Dec 2010) Neuroprognosis in dyslexia (22 Dec 2010) Where commercial and clinical interests collide: Auditory processing disorder (6 Mar 2011) Auditory processing disorder (30 Mar 2011) Special educational needs: will they be met by the Green paper proposals? (9 Apr 2011) Is poor parenting really to blame for children's school problems? (3 Jun 2011) Early intervention: what's not to like? (1 Sep 2011) Lies, damned lies and spin (15 Oct 2011) A message to the world (31 Oct 2011) Vitamins, genes and language (13 Nov 2011) Neuroscientific interventions for dyslexia: red flags (24 Feb 2012) Phonics screening: sense and sensibility (3 Apr 2012) What Chomsky doesn't get about child language (3 Sept 2012) Data from the phonics screen (1 Oct 2012) Auditory processing disorder: schisms and skirmishes (27 Oct 2012) High-impact journals (Action video games and dyslexia: critique) (10 Mar 2013) Overhyped genetic findings: the case of dyslexia (16 Jun 2013) The arcuate fasciculus and word learning (11 Aug 2013) Changing children's brains (17 Aug 2013) Raising awareness of language learning impairments (26 Sep 2013) Good and bad news on the phonics screen (5 Oct 2013) What is educational neuroscience? (25 Jan 2014) Parent talk and child language (17 Feb 2014) My thoughts on the dyslexia debate (20 Mar 2014) Labels for unexplained language difficulties in children (23 Aug 2014) International reading comparisons: Is England really doing so poorly? (14 Sep 2014) Our early assessments of schoolchildren are misleading and damaging (4 May 2015) Opportunity cost: a new red flag for evaluating interventions (30 Aug 2015) The STEP Physical Literacy programme: have we been here before? (2 Jul 2017) Prisons, developmental language disorder, and base rates (3 Nov 2017) Reproducibility and phonics: necessary but not sufficient (27 Nov 2017) Developmental language disorder: the need for a clinically relevant definition (9 Jun 2018)

Autism
Autism diagnosis in cultural context (16 May 2011) Are our ‘gold standard’ autism diagnostic instruments fit for purpose? (30 May 2011) How common is autism? (7 Jun 2011) Autism and hypersystematising parents (21 Jun 2011) An open letter to Baroness Susan Greenfield (4 Aug 2011) Susan Greenfield and autistic spectrum disorder: was she misrepresented? (12 Aug 2011) Psychoanalytic treatment for autism: Interviews with French analysts (23 Jan 2012) The ‘autism epidemic’ and diagnostic substitution (4 Jun 2012) How wishful thinking is damaging Peta's cause (9 June 2014) NeuroPointDX's blood test for Autism Spectrum Disorder (12 Jan 2019)

Developmental disorders/paediatrics
The hidden cost of neglected tropical diseases (25 Nov 2010) The National Children's Study: a view from across the pond (25 Jun 2011) The kids are all right in daycare (14 Sep 2011) Moderate drinking in pregnancy: toxic or benign? (21 Nov 2012) Changing the landscape of psychiatric research (11 May 2014)

Genetics
Where does the myth of a gene for things like intelligence come from? (9 Sep 2010) Genes for optimism, dyslexia and obesity and other mythical beasts (10 Sep 2010) The X and Y of sex differences (11 May 2011) Review of How Genes Influence Behaviour (5 Jun 2011) Getting genetic effect sizes in perspective (20 Apr 2012) Moderate drinking in pregnancy: toxic or benign? (21 Nov 2012) Genes, brains and lateralisation (22 Dec 2012) Genetic variation and neuroimaging (11 Jan 2013) Have we become slower and dumber? (15 May 2013) Overhyped genetic findings: the case of dyslexia (16 Jun 2013) Incomprehensibility of much neurogenetics research (1 Oct 2016) A common misunderstanding of natural selection (8 Jan 2017) Sample selection in genetic studies: impact of restricted range (23 Apr 2017) Pre-registration or replication: the need for new standards in neurogenetic studies (1 Oct 2017) Review of 'Innate' by Kevin Mitchell (15 Apr 2019)

Neuroscience
Neuroprognosis in dyslexia (22 Dec 2010) Brain scans show that… (11 Jun 2011) Time for neuroimaging (and PNAS) to clean up its act (5 Mar 2012) Neuronal migration in language learning impairments (2 May 2012) Sharing of MRI datasets (6 May 2012) Genetic variation and neuroimaging (11 Jan 2013) The arcuate fasciculus and word learning (11 Aug 2013) Changing children's brains (17 Aug 2013) What is educational neuroscience? (25 Jan 2014) Changing the landscape of psychiatric research (11 May 2014) Incomprehensibility of much neurogenetics research (1 Oct 2016)

Reproducibility
Accentuate the negative (26 Oct 2011) Novelty, interest and replicability (19 Jan 2012) High-impact journals: where newsworthiness trumps methodology (10 Mar 2013) Who's afraid of open data? (15 Nov 2015) Blogging as post-publication peer review (21 Mar 2013) Research fraud: More scrutiny by administrators is not the answer (17 Jun 2013) Pressures against cumulative research (9 Jan 2014) Why does so much research go unpublished? (12 Jan 2014) Replication and reputation: Whose career matters? (29 Aug 2014) Open code: not just data and publications (6 Dec 2015) Why researchers need to understand poker (26 Jan 2016) Reproducibility crisis in psychology (5 Mar 2016) Further benefit of registered reports (22 Mar 2016) Would paying by results improve reproducibility? (7 May 2016) Serendipitous findings in psychology (29 May 2016) Thoughts on the Statcheck project (3 Sep 2016) When is a replication not a replication? (16 Dec 2016) Reproducible practices are the future for early career researchers (1 May 2017) Which neuroimaging measures are useful for individual differences research? (28 May 2017) Prospecting for kryptonite: the value of null results (17 Jun 2017) Pre-registration or replication: the need for new standards in neurogenetic studies (1 Oct 2017) Citing the research literature: the distorting lens of memory (17 Oct 2017) Reproducibility and phonics: necessary but not sufficient (27 Nov 2017) Improving reproducibility: the future is with the young (9 Feb 2018) Sowing seeds of doubt: how Gilbert et al's critique of the reproducibility project has played out (27 May 2018) Preprint publication as karaoke (26 Jun 2018) Standing on the shoulders of giants, or slithering around on jellyfish: Why reviews need to be systematic (20 Jul 2018) Matlab vs open source: costs and benefits to scientists and society (20 Aug 2018)

Statistics
Book review: biography of Richard Doll (5 Jun 2010) Book review: the Invisible Gorilla (30 Jun 2010) The difference between p < .05 and a screening test (23 Jul 2010) Three ways to improve cognitive test scores without intervention (14 Aug 2010) A short nerdy post about the use of percentiles (13 Apr 2011) The joys of inventing data (5 Oct 2011) Getting genetic effect sizes in perspective (20 Apr 2012) Causal models of developmental disorders: the perils of correlational data (24 Jun 2012) Data from the phonics screen (1 Oct 2012) Moderate drinking in pregnancy: toxic or benign? (1 Nov 2012) Flaky chocolate and the New England Journal of Medicine (13 Nov 2012) Interpreting unexpected significant results (7 June 2013) Data analysis: Ten tips I wish I'd known earlier (18 Apr 2014) Data sharing: exciting but scary (26 May 2014) Percentages, quasi-statistics and bad arguments (21 July 2014) Why I still use Excel (1 Sep 2016) Sample selection in genetic studies: impact of restricted range (23 Apr 2017) Prospecting for kryptonite: the value of null results (17 Jun 2017) Prisons, developmental language disorder, and base rates (3 Nov 2017) How Analysis of Variance Works (20 Nov 2017) ANOVA, t-tests and regression: different ways of showing the same thing (24 Nov 2017) Using simulations to understand the importance of sample size (21 Dec 2017) Using simulations to understand p-values (26 Dec 2017) One big study or two small studies? (12 Jul 2018)

Journalism/science communication
Orwellian prize for scientific misrepresentation (1 Jun 2010) Journalists and the 'scientific breakthrough' (13 Jun 2010) Science journal editors: a taxonomy (28 Sep 2010) Orwellian prize for journalistic misrepresentation: an update (29 Jan 2011) Academic publishing: why isn't psychology like physics? (26 Feb 2011) Scientific communication: the Comment option (25 May 2011) Publishers, psychological tests and greed (30 Dec 2011) Time for academics to withdraw free labour (7 Jan 2012) 2011 Orwellian Prize for Journalistic Misrepresentation (29 Jan 2012) Time for neuroimaging (and PNAS) to clean up its act (5 Mar 2012) Communicating science in the age of the internet (13 Jul 2012) How to bury your academic writing (26 Aug 2012) High-impact journals: where newsworthiness trumps methodology (10 Mar 2013) A short rant about numbered journal references (5 Apr 2013) Schizophrenia and child abuse in the media (26 May 2013) Why we need pre-registration (6 Jul 2013) On the need for responsible reporting of research (10 Oct 2013) A New Year's letter to academic publishers (4 Jan 2014) Journals without editors: What is going on? (1 Feb 2015) Editors behaving badly? (24 Feb 2015) Will Elsevier say sorry? (21 Mar 2015) How long does a scientific paper need to be? (20 Apr 2015) Will traditional science journals disappear? (17 May 2015) My collapse of confidence in Frontiers journals (7 Jun 2015) Publishing replication failures (11 Jul 2015) Psychology research: hopeless case or pioneering field? (28 Aug 2015) Desperate marketing from J. Neuroscience (18 Feb 2016) Editorial integrity: publishers on the front line (11 Jun 2016) When scientific communication is a one-way street (13 Dec 2016) Breaking the ice with buxom grapefruits: Pratiques de publication and predatory publishing (25 Jul 2017) Should editors edit reviewers? (26 Aug 2018)

Social Media
A gentle introduction to Twitter for the apprehensive academic (14 Jun 2011) Your Twitter Profile: The Importance of Not Being Earnest (19 Nov 2011) Will I still be tweeting in 2013? (2 Jan 2012) Blogging in the service of science (10 Mar 2012) Blogging as post-publication peer review (21 Mar 2013) The impact of blogging on reputation (27 Dec 2013) WeSpeechies: A meeting point on Twitter (12 Apr 2014) Email overload (12 Apr 2016) How to survive on Twitter - a simple rule to reduce stress (13 May 2018)

Academic life
An exciting day in the life of a scientist (24 Jun 2010) How our current reward structures have distorted and damaged science (6 Aug 2010) The challenge for science: speech by Colin Blakemore (14 Oct 2010) When ethics regulations have unethical consequences (14 Dec 2010) A day working from home (23 Dec 2010) Should we ration research grant applications? (8 Jan 2011) The one hour lecture (11 Mar 2011) The expansion of research regulators (20 Mar 2011) Should we ever fight lies with lies? (19 Jun 2011) How to survive in psychological research (13 Jul 2011) So you want to be a research assistant? (25 Aug 2011) NHS research ethics procedures: a modern-day Circumlocution Office (18 Dec 2011) The REF: a monster that sucks time and money from academic institutions (20 Mar 2012) The ultimate email auto-response (12 Apr 2012) Well, this should be easy…. (21 May 2012) Journal impact factors and REF2014 (19 Jan 2013) An alternative to REF2014 (26 Jan 2013) Postgraduate education: time for a rethink (9 Feb 2013) Ten things that can sink a grant proposal (19 Mar 2013) Blogging as post-publication peer review (21 Mar 2013) The academic backlog (9 May 2013) Discussion meeting vs conference: in praise of slower science (21 Jun 2013) Why we need pre-registration (6 Jul 2013) Evaluate, evaluate, evaluate (12 Sep 2013) High time to revise the PhD thesis format (9 Oct 2013) The Matthew effect and REF2014 (15 Oct 2013) The University as big business: the case of King's College London (18 June 2014) Should vice-chancellors earn more than the prime minister? (12 July 2014) Some thoughts on use of metrics in university research assessment (12 Oct 2014) Tuition fees must be high on the agenda before the next election (22 Oct 2014) Blaming universities for our nation's woes (24 Oct 2014) Staff satisfaction is as important as student satisfaction (13 Nov 2014) Metricophobia among academics (28 Nov 2014) Why evaluating scientists by grant income is stupid (8 Dec 2014) Dividing up the pie in relation to REF2014 (18 Dec 2014) Shaky foundations of the TEF (7 Dec 2015) A lamentable performance by Jo Johnson (12 Dec 2015) More misrepresentation in the Green Paper (17 Dec 2015) The Green Paper’s level playing field risks becoming a morass (24 Dec 2015) NSS and teaching excellence: wrong measure, wrongly analysed (4 Jan 2016) Lack of clarity of purpose in REF and TEF (2 Mar 2016) Who wants the TEF? (24 May 2016) Cost benefit analysis of the TEF (17 Jul 2016) Alternative providers and alternative medicine (6 Aug 2016) We know what's best for you: politicians vs. experts (17 Feb 2017) Advice for early career researchers re job applications: Work 'in preparation' (5 Mar 2017) Should research funding be allocated at random? (7 Apr 2018) Power, responsibility and role models in academia (3 May 2018) My response to the EPA's 'Strengthening Transparency in Regulatory Science' (9 May 2018) More haste less speed in calls for grant proposals (11 Aug 2018) Has the Society for Neuroscience lost its way? (24 Oct 2018) The Paper-in-a-Day Approach (9 Feb 2019) Benchmarking in the TEF: Something doesn't add up (3 Mar 2019) The Do It Yourself conference (26 May 2019)

Celebrity scientists/quackery
Three ways to improve cognitive test scores without intervention (14 Aug 2010) What does it take to become a Fellow of the RSM? (24 Jul 2011) An open letter to Baroness Susan Greenfield (4 Aug 2011) Susan Greenfield and autistic spectrum disorder: was she misrepresented? (12 Aug 2011) How to become a celebrity scientific expert (12 Sep 2011) The kids are all right in daycare (14 Sep 2011)  The weird world of US ethics regulation (25 Nov 2011) Pioneering treatment or quackery? How to decide (4 Dec 2011) Psychoanalytic treatment for autism: Interviews with French analysts (23 Jan 2012) Neuroscientific interventions for dyslexia: red flags (24 Feb 2012) Why most scientists don't take Susan Greenfield seriously (26 Sept 2014) NeuroPointDX's blood test for Autism Spectrum Disorder ( 12 Jan 2019)

Women
Academic mobbing in cyberspace (30 May 2010) What works for women: some useful links (12 Jan 2011) The burqua ban: what's a liberal response (21 Apr 2011) C'mon sisters! Speak out! (28 Mar 2012) Psychology: where are all the men? (5 Nov 2012) Should Rennard be reinstated? (1 June 2014) How the media spun the Tim Hunt story (24 Jun 2015)

Politics and Religion
Lies, damned lies and spin (15 Oct 2011) A letter to Nick Clegg from an ex liberal democrat (11 Mar 2012) BBC's 'extensive coverage' of the NHS bill (9 Apr 2012) Schoolgirls' health put at risk by Catholic view on vaccination (30 Jun 2012) A letter to Boris Johnson (30 Nov 2013) How the government spins a crisis (floods) (1 Jan 2014) The alt-right guide to fielding conference questions (18 Feb 2017) We know what's best for you: politicians vs. experts (17 Feb 2017) Barely a good word for Donald Trump in Houses of Parliament (23 Feb 2017) Do you really want another referendum? Be careful what you wish for (12 Jan 2018) My response to the EPA's 'Strengthening Transparency in Regulatory Science' (9 May 2018) What is driving Theresa May? ( 27 Mar 2019)

Humour and miscellaneous
Orwellian prize for scientific misrepresentation (1 Jun 2010) An exciting day in the life of a scientist (24 Jun 2010) Science journal editors: a taxonomy (28 Sep 2010) Parasites, pangolins and peer review (26 Nov 2010) A day working from home (23 Dec 2010) The one hour lecture (11 Mar 2011) The expansion of research regulators (20 Mar 2011) Scientific communication: the Comment option (25 May 2011) How to survive in psychological research (13 Jul 2011) Your Twitter Profile: The Importance of Not Being Earnest (19 Nov 2011) 2011 Orwellian Prize for Journalistic Misrepresentation (29 Jan 2012) The ultimate email auto-response (12 Apr 2012) Well, this should be easy…. (21 May 2012) The bewildering bathroom challenge (19 Jul 2012) Are Starbucks hiding their profits on the planet Vulcan? (15 Nov 2012) Forget the Tower of Hanoi (11 Apr 2013) How do you communicate with a communications company? (30 Mar 2014) Noah: A film review from 32,000 ft (28 July 2014) The rationalist spa (11 Sep 2015) Talking about tax: weasel words (19 Apr 2016) Controversial statues: remove or revise? (22 Dec 2016) The alt-right guide to fielding conference questions (18 Feb 2017) My most popular posts of 2016 (2 Jan 2017) An index of neighbourhood advantage from English postcode data (15 Sep 2018) Working memories: A brief review of Alan Baddeley's memoir (13 Oct 2018)

Sunday, 26 May 2019

The Do It Yourself (DIY) conference


This blogpost was inspired by a tweet from Natalie Jester, a PhD student at the School for Sociology, Politics and International Studies at the University of Bristol, raising this question:


I agreed with her, noting that the main costs were venue hire and speaker expenses, but that prices were often hiked by organisers using lavish venues and aiming to make a profit from the meeting. I linked to my earlier post about the eye-watering profits that the Society for Neuroscience makes from its meetings. In contrast, the UK's Experimental Psychology Society uses its income from membership fees and its journal to support meetings three times a year, and doesn't even charge a registration fee.

Pradeep Reddy Raamana, a Canadian Open Neuroscience scholar from Toronto, responded, drawing my attention to a thread on this very topic from a couple of weeks ago.



There were useful suggestions in the thread, including reducing costs by spending less on luxurious accommodation for organisers, and encouraging PIs to earmark funds for their junior staff to cover their conference attendance costs.

That's all fine, but my suggestion is for a radically different approach, which is to find a small group of 2-3 like-minded people and organise your own conference. I'm sure that people will respond by saying that they have to go to the big society meetings in their field in order to network and promote their research.  There's nothing in my suggestions that would preclude you also doing this (though see climate emergency point below). But I suspect that if you go down the DIY route, you may get a lot more out of the experience than you would by attending a big, swish society conference: both in terms of personal benefits and career prospects.

I'm sure people will want to add to these ideas, but here's my experience, which is based on running various smallish meetings, including being local organiser for occasional EPS meetings over the years. I was also, with Katharine Perera, Gina Conti-Ramsden and Elena Lieven,  a co-organiser of the Child Language Seminar (CLS) in Manchester back in the 1980s.  That is perhaps the best example of a DIY conference, because we had no infrastructure and just worked it out as we went along.  The CLS was a very ad hoc thing: each year, the meeting organisers tried to find someone who was prepared to run the next CLS at their own institution the following year. Despite this informality, the CLS – now with the more appropriate name of Child Language Symposium – is still going strong in 2019. From memory, we had around 120 people from all over the world at the Manchester meeting. Numbers have grown over the years, but in general if you were doing a DIY meeting for the first time, I'd aim to keep it small; no more than 200 people.

The main costs you will incur in organising a meeting are:
  • Venue
  • Refreshments
  • Reception/Conference dinner
  • Expenses for speakers
  • Administrative costs
  • Publicity
Your income to cover these costs will come from:
  • Grants (optional)
  • Registration fees

So the main thing to do at the start is to sit down and do some sums to ensure you will break even. Here are my experiences with each of these categories:

Venue

You do not need to hold the meeting at a swanky hotel. Your university is likely to have conference facilities: check out their rates. Consider what you need in terms of lecture theatre capacity, break-out rooms, and rooms for posters/refreshments. You also need to factor in the cost of technical support. My advice is to let people look after their own accommodation: at most, just give them a list of places to stay. This massively cuts down on your workload.

Refreshments

The venue should be able to offer teas/coffees. You will probably be astounded at what institutions charge for a cup of instant coffee and a boring biscuit, but I recommend you go with the flow on that one. People do need their coffee breaks, however humble the refreshments.

Reception/Conference dinner

A welcome reception is a good way of breaking the ice on the first evening. It need not be expensive: a few bottles of wine, plus water, soft drinks and some nibbles, are adequate. You could just find a space to do this and provide the refreshments yourselves: most of the EPS meetings I've been to just have some bottles provided and people help themselves. This will be cheaper than the rates from conference organisers.

You don't have to have a conference dinner. They can be rather stuffy affairs, and a torment for shy people who don't know anyone. On the other hand, when they work well, they provide an opportunity to get to know people and chat about work informally. My experience at EPS and CLS is that the easiest way to organise this is to book a local restaurant. They will probably suggest a set meal at a set price, with people selecting options in advance. This will involve some admin work – see below.

Expenses for speakers

For a meeting like CLS there are a small number of invited plenary speakers. This is your opportunity to invite the people you really want to hear from. It's usual to offer economy class travel and accommodation in a good hotel. This does not need to be lavish, but it should have quiet rooms with ensuite bathroom, large, comfortable bed, desk area, sufficient power supply, adequate aircon/heating, and free wifi. Someone who has flown around the world to come to your meeting is not going to remember you fondly if they are put up in a cramped bed and breakfast. I've had some dismal experiences over the years and now check TripAdvisor to make sure I've not been booked in somewhere awful.  I still remember attending a meeting where an eminent speaker had flown in from North America only to find herself put in student accommodation: she turned around and booked herself into a hotel, and left with dismal memories of the organisers.

Pradeep noted that conferences could save costs if speakers covered their own expenses. This is true, and many speakers do have funds that they could use for this purpose. But don't assume that is the case: even if they do have funds, you'd have to consider why they'd rather spend that money on coming to your meeting than on something else. A diplomatic way of discussing this is to say in the letter of invitation that you can cover economy class travel, accommodation, dinner and registration. However, if speakers have funds that could be used for their travel, then that will make it possible to offer some sponsored places to students.

Administration

It's easy to overlook this item, but fortunately it is now relatively simple to handle registrations with online tools such as Eventbrite. They take a cut if you charge for registration, but in my experience that's well worth it, in terms of saving a lot of grief with spreadsheets. If you are having a conference dinner, booking for this can be bundled in with the registration fee.

In the days of Manchester CLS, email barely existed and nobody expected a conference website, but nowadays that is mandatory, and so you will need someone willing to set it up and populate it with information about venue and programme. As with my other advice, no need to make it fancy; just ensure there is the basic information that people need, with a link to a place for registration.

There are further items like setting up the Eventbrite page, making conference badges, and ensuring smooth communications with the venue, speakers and restaurant. Here the main thing is to delegate responsibility so that everyone knows what they have to do. I've quite often experienced the situation where I've agreed to speak at a meeting only to find that nobody has contacted me about the programme or venue when there's only a week to go.

On the day, you'll be glad of assistants who can do things like shepherding people into sessions, taking messages, etc. You can offer free registration to local students in return for them acting in this role.

Publicity

I've listed this under costs, but I've never spent on this for meetings I've organised, and given social media, I don't think you'll need to.

Grants

I've put optional for grants, as you can cover costs without a grant. But every bit of money helps and it's possible that one of the organisers will have funding that can be used. However, my advice is to check out options for grant funding from a society or other funder. National funding bodies such as UK research councils or NIH may have pots of money you can apply for: the sums are typically small and applying for the money is not onerous. Even if a society doesn't have a grants stream for meetings, they may be willing to sponsor places for specific categories of attendees: early-career people or those from resource-poor countries.

Local businesses or publishers are often willing to sponsor things like conference bags, in return for showing their logo. You can often charge publishers for a stand.

Registration

Once you have thought through the items under Expenditure, and have an idea of whether you'll have grant income, you will be in a good position to work out what you need to charge those attending to cover your costs. The ideal is to break even, but it's important not to overspend and so you should estimate how many people are likely to register in each category, and work out a registration fee that will cover this, even if numbers are disappointing.
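If it helps to make those sums concrete, here is a minimal break-even sketch in Python. All the figures and cost categories below are invented for illustration; plug in your own quotes.

```python
# Toy break-even calculation for a small DIY conference.
# All figures below are invented placeholders, not real quotes.

def breakeven_fee(fixed_costs, per_head_costs, grant_income, n_attendees):
    """Registration fee per head needed to cover costs at a given attendance."""
    shortfall = fixed_costs + per_head_costs * n_attendees - grant_income
    return max(shortfall / n_attendees, 0)

fixed = 2000 + 1500 + 500    # venue hire, speaker expenses, admin/publicity
per_head = 15 + 10           # refreshments and reception, per attendee
grant = 1000                 # optional sponsorship or small grant

# Price against pessimistic attendance so disappointing numbers don't sink you
for n in (80, 120, 160):
    print(f"{n} attendees: charge at least £{breakeven_fee(fixed, per_head, grant, n):.2f}")
```

The point of the loop is the advice above: set the fee so that you still break even at the low end of your attendance estimate, not the optimistic one.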

What can go wrong?

  • Acts of God. I still remember a meeting at the Royal Society years ago where a hurricane swept across Britain overnight and around 50% of those attending couldn't make it. Other things like strikes, riots, etc. can happen, but I recommend you just accept these are risks not under your control.
  • Clash of dates. This is under your control to some extent. Before you settle on a date, ask around to check there isn't a clash with other meetings or with religious holidays.
  • Speaker pulls out. I have organised meetings where a speaker pulled out at the last minute – there will usually be a good reason for this, such as illness. So long as it is only one person, this can be managed, and may indeed provide an opportunity to do something useful with the time, such as holding a mini-hackathon to brainstorm ideas about a specific problem.
  • You make a loss. This is a scary prospect, but it should not happen with adequate planning, as noted above. The main thing is to make sure you confirm what your speaker expenses will be, so you don't get any nasty surprises at the last minute.
  • Difficult people. This is a minor one, but I remember wise words of Betty Byers Brown, a collaborator from those old Manchester days, who told me that 95% of the work of a conference organiser is caused by 5% of those attending. Just knowing that is the case makes it easier to deal with.
  • Unhappy people. People coming from far away who know nobody can have a miserable time at a conference, but with planning, you can help them integrate in a group. Rather than formal entertainment, consider having social activities that ensure everyone is included. Also, have an explicit anti-harassment policy – there are plenty of examples on the web.
  • Criticism. Whatever you do there will be people who complain – why didn't you do X rather than Y?  This can be demoralising if you have put a lot of work into organising something.  Nevertheless, make sure you do ask people for feedback after the meeting: if there are things that could be done better next time, you need to know about them. For what it's worth, the most common complaints I hear after meetings are that speakers go on too long and there is not enough time for questions and discussion. It's important to have firm chairing, and to set up the schedule to encourage interaction.

What can go right?

  • Running a conference carries an element of risk and stress, but it's an opportunity to develop organisational skills, and this can be a great thing to put on your CV. The skills you need to plan a conference are not so different from those needed to budget for a grant: you have to work out how to optimise the use of funds, anticipating expenses and risks.
  • Bonding with co-organisers. If you pick your co-organisers wisely, you may find that the experience of working together to solve problems is enjoyable and you learn a lot.
  • You can choose the topics for your meeting and get to invite the speakers you most want to hear. As a young researcher organising a small meeting, I got to know people I'd invited as speakers in a way that would not be possible if I was just attending a big meeting organised by a major society.
  • You can do it your way. You can decide if you want to lower costs for specific groups. You can make sure that the speakers are diverse, and can experiment with different approaches to get away from the traditional format of speakers delivering a lecture to an audience. For examples see this post and comments below it.
  • The main thing is that if you are in control, you can devise your meeting to ensure it achieves what scientific meetings are supposed to achieve: scholarly communication and interaction to spark ideas and collaborations. My memories of meetings I have organised as an early-career academic have been high points in my career, which is why I am so keen to encourage others to do this.

But! .... Climate emergency

The elephant in this particular room is air travel. Academics are used to zipping around the world to go to conferences, at a time when we are increasingly recognising the harm this is doing to our planet. My only justification for writing this post at the current time is that it may encourage people to go to smaller, more-focused meetings. But I'm trying to cut down on air travel substantially and in the longer term, suspect that we will need to move to virtual meetings.

Groups of younger researchers, and those from outside Europe and the UK, have a role to play in working out how to do this. I hope to encourage this by urging people to be bold and to venture outside the big conference arenas where junior people and those from marginalised groups can feel they are invisible. Organising a small meeting teaches you a lot of necessary skills that may be used in devising more radical formats. The future of conferences is going to change and you need to be shaping it.


Monday, 15 April 2019

Review of 'Innate' by Kevin Mitchell


Innate: How the Wiring of Our Brains Shapes Who We Are.  Kevin J. Mitchell. Princeton, New Jersey, USA: Princeton University Press, 2018, 293 pages, hardcover. ISBN: 978-0-691-17388-7.

This is a preprint of a review written for the Journal of Mind and Behavior.

Most of us are perfectly comfortable hearing about biological bases of differences between species, but studies of biological bases of differences between people can make us uneasy. This can create difficulties for the scientist who wants to do research on the way genes influence neurodevelopment: if we identify genetic variants that account for individual differences in brain function, then it may seem a small step to concluding that some people are inherently more valuable than others. And indeed in 2018 we saw calls for the use of polygenic risk scores to select embryos for potential educational attainment (Parens et al., 2019). There has also been widespread condemnation of the first attempt to create a genetically modified baby using CRISPR technology (Normile, 2018), with the World Health Organization responding by setting up an advisory committee to develop global standards for governance of human genome editing (World Health Organization, 2019).
Kevin Mitchell's book Innate is essential reading for anyone concerned about the genetics behind these controversies. The author is a superb communicator, who explains complex ideas clearly without sacrificing accuracy. The text is devoid of hype and wishful thinking, and it confronts the ethical dilemmas raised by this research area head-on. I'll come back to those later, but will start by summarising Mitchell's take on where we are in our understanding of genetic influences on neurodevelopment.
Perhaps one of the biggest mistakes that we've made in the past is to teach elementary genetics with an exclusive focus on Mendelian inheritance. Mendel and his peas provided crucial insights into units of inheritance, allowing us to predict precisely the probabilities of different outcomes in offspring of parents through several generations.  The discovery of DNA provided a physical instantiation of the hitherto abstract gene, as well as providing insight into mechanisms of inheritance.  During the first half of the 20th century it became clear that there are human traits and diseases that obey Mendelian laws impeccably: blood groups, Huntington's disease, and cystic fibrosis, to name but a few. The problem is that many intelligent laypeople assume that this is how genetics works in general. If a condition is inherited, then the task is to track down the gene responsible.  And indeed, 40 years ago, many researchers took this view, and set out to track genes for autism, hearing loss, dyslexia and so on.  Ben Goldacre's (2014) comment 'I think you'll find it's a bit more complicated than that' was made in a rather different context, but is a very apt slogan to convey where genetics finds itself in 2019.  Here are some of the key messages that the author conveys, with clarity and concision, which provide essential background to any discussion of ethical implications of research.
1. Genes are not a blueprint
The same DNA does not lead to identical outcomes. We know this from the study of inbred animals, from identical human twins, and even from studying development of the two sides of the body in a single person. How can this be? DNA is a chemically inert material, which carries instructions, in a sequence of bases, for how to build a body from proteins. Shouldn't two organisms with identical DNA turn out the same? The answer is no, because DNA can in effect be switched on and off: that's how it is possible for the same DNA to create a wide variety of different cell types, depending on which genes are transcribed and when. As Mitchell puts it: "While DNA just kind of sits there, proteins are properly impressive – they do all sorts of things inside cells, acting like tiny molecular machines or robots, carrying out tens of thousands of different functions." DNA is chemically stable, but messenger RNA, which conveys the information to the cell where proteins are produced, is much less so. Individual cells transcribe messenger RNA in bursts. There is variability in this process, which can lead to differences in development.
2. Chance plays an important role in neurodevelopment
Consideration of how RNA functions leads to an important conclusion: factors affecting neurodevelopment can't just be divided into genetic vs. environmental influences: random fluctuations in the transcription process mean that chance also plays a role. 
Moving from the neurobiological level, Mitchell notes that the interpretation of twin studies tends to ignore the role of chance. When identical (monozygotic or MZ) twins grow up differently, this is often attributed to the effects of 'non-shared environment', implying there may have been some systematic differences in their experiences, either pre- or post-natal, that led them to differ. But, such effects don't need to be invoked to explain why identical twins can differ: this can arise because of random effects operating at a very early stage of neurodevelopment.
3. Small initial differences can lead to large variation in outcome
If chance is one factor overlooked in many accounts of genetics, development is the other. There are interactions between proteins, such that when messenger RNA from gene A reaches a certain level, this will increase expression of genes B and C.  Those genes in turn can affect others in a cascading sequence. This mechanism can amplify small initial differences to create much larger effects.
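The cascade idea can be shown with a toy threshold model (a pure illustration, not a model of any real gene network): if each stage boosts expression only when its input crosses a threshold, a 2% initial difference can send two otherwise identical systems to wildly different outcomes.

```python
# Toy gene cascade: each stage amplifies its input only above a threshold.
# Parameter values are arbitrary illustrations, not biological estimates.

def cascade(level, threshold=1.01, gain=2.0, decay=0.9, steps=8):
    """Pass an initial expression level through a chain of threshold stages."""
    for _ in range(steps):
        level = level * gain if level > threshold else level * decay
    return level

print(cascade(1.00))  # starts just below threshold: expression dwindles
print(cascade(1.02))  # starts just above: each stage amplifies it further
```

The two starting values differ by only 2%, but because one sits just below the threshold and the other just above, the downstream outcomes diverge by several orders of magnitude.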
4. Genetic is not the same as heritable
Genetic variants that influence neurodevelopment can be transmitted in the DNA passed from parent to child leading to heritable disorders and traits.  But many genetically-based neurodevelopmental disorders do not work like this; rather, they are caused by 'de novo' mutations, i.e. changes to DNA that arise early in embryogenesis, and so are not shared with either parent.
5. We all have many mutations
The notion that there is a clear divide between 'normal people' with a nice pure genome and 'disordered' people with mutations is a fiction. All of us have numerous copy number variants (CNVs), chunks of DNA that are deleted or duplicated (Beckmann, Estivill, & Antonarakis, 2007), as well as point mutations, i.e. changes in a single base pair of DNA. When the scale of mutation in 'normal' people was first discovered, it came as quite a shock to the genetics community, jamming a spanner in the works for researchers trying to uncover causes of specific conditions. If we find a rare CNV or point mutation in a person with a disorder, it could just be coincidence and not play any causal role. Converging evidence is needed. Studies of gene function can help establish causality; the impact on brain development will depend on whether a mutation affects key aspects of protein synthesis; but even so, there have been cases where a mutation thought to play a key role in a disorder then pops up in someone whose development is entirely unremarkable. A cautionary tale is offered by Toma et al. (2018), who studied variants in CNTNAP2, a gene that was thought to be related to autism and schizophrenia. They found that the burden of rare variants that disrupted gene function was just as high in individuals from the general population as in people with autism or schizophrenia.
6. One gene – one disorder is the exception rather than the rule
For many neurodevelopmental conditions, e.g. autism, intellectual disability, and epilepsy, associated mutations have been tracked down. But most of them account for only a small proportion of affected individuals, and furthermore, the same mutation is typically associated with different disorders.  Our diagnostic categories don't map well onto the genes.
This message is of particular interest to me, as I have been studying the impact of a major genetic change – presence of an extra X or Y chromosome – on children's development: this includes girls with an additional X chromosome (trisomy X), boys with an extra X (XXY or Klinefelter's syndrome) and boys with an extra Y (XYY constitution). The impact of an extra sex chromosome is far less than you might expect: most of these children attend mainstream school and live independently as adults. There has been much speculation about possible contrasting effects of an extra X versus an extra Y chromosome. However, in general, one finds that variation within a particular trisomy group is far greater than variation between them. So, with all three types of trisomy, there is an increased likelihood that the child will have educational difficulties, language and attentional problems, and there's also a risk of social anxiety. In a minority of cases the child meets criteria for autism or intellectual disability (Wilson, King & Bishop, 2019). The range of outcomes is substantial – something that makes it difficult to advise parents when the trisomy is discovered. The story is similar for some other mutations: there are cases where a particular gene is described as an 'autism gene', only for later studies to find that individuals with the same mutation may have attention deficit hyperactivity disorder, epilepsy, language disorder, intellectual disability – or indeed, no diagnosis at all. For instance, Niarchou et al. (2019) published a study of a sample of children with a deletion or duplication at a site on chromosome 16 (16p11.2), predicting that the deletion would be associated with autism, and the duplication with autism or schizophrenia. In fact, they found that the commonest diagnosis with both conditions was attention deficit hyperactivity disorder, though rates of intellectual disability and autism were also increased.
Notably, 52% of the cases with a deletion and 37% of those with a duplication had no psychiatric diagnosis.
There are several ways in which such variation in outcomes might arise. First, the impact of a particular mutation may depend on the genetic background – for instance, if the person has another mutation affecting the same neural circuits, this 'double hit' may have a severe impact, whereas either mutation alone would be innocuous. A second possibility is that there may be environmental factors that affect outcomes. There is a lot of interest in this idea because it opens up potential for interventions. The third option, though, is the one that is often overlooked: the possibility that differences in outcomes are the consequence of random factors early in neurodevelopment, which then have cascading effects that amplify initial minor differences (see points 2 and 3).
7. A mutation may create general developmental instability
Many geneticists think of effects of mutations in terms of the functional impact on particular developmental processes. In the case of neurodevelopment, there is interest in how genes affect processes such as neuronal migration (movement of cells to their final position in the brain), synaptic connectivity (affecting communication between cells) or myelination (formation of white matter sheaths around nerve fibres).  Mitchell suggests, however, that mutations may have more general effects, simply making the brain less able to adapt to disruptive processes in development.  Many of us learn about genetics in the context of conditions like Huntington's disease, where a specific mutation leads to a recognisable syndrome. However, for many neurodevelopmental conditions, the impact of a mutation is to increase the variation in outcomes.  This makes sense of the observations outlined in point 5: a mutation can be associated with a range of developmental disabilities, but with different conditions in different people.
8. Sex differences in risk for neurodevelopmental disorders have genetic origins
There has been so much exaggeration and bad science in research on sex differences in the brain, that it has become popular to either deny their existence, or attribute them to sex differences in environmental experiences of males and females. Mitchell has no time for such arguments. There is ample evidence from animal studies that both genes and hormones affect neurodevelopment: why should humans be any different? But he adds two riders: first, although systematic sex differences can be found in human brains, they are small enough to be swamped by individual variation within each sex. So if you want to know about the brain of an individual, their sex would not tell you very much. And second, different does not mean inferior.
Mitchell argues that brain development is more variable in males than females and he cites evidence that, while average ability scores are similar for males and females, males show more variation and are overrepresented at the extremes of distributions of ability. The over-representation at the lower end has been recognised for many years and is at least partly explicable in terms of how the sex chromosomes operate. Many syndromes of intellectual disability are X-linked, which means they are caused by a mutation of large effect on the X chromosome. The mother of an affected boy often carries the same mutation but shows no impairment: this is because she has two X chromosomes, and the effect of a mutation on one of them is compensated for by the unaffected chromosome. The boy has XY chromosome constitution, with the Y being a small chromosome with few genes on it, and so the full impact of an X-linked mutation will be seen. Having said that, many conditions with a male preponderance, such as autism and developmental language disorder,  do not appear to involve X-linked genes, and some disorders, such as depression, are more common in females, so there is still much we need to explain. Mitchell's point is that we won't make progress in doing so by denying a role for sex chromosomes or hormones in neurodevelopment.  
Mitchell moves into much more controversial territory in describing studies showing over-representation of males at the other end of the ability distribution: e.g. in people with extraordinary skills in mathematics. That is much harder to account for in terms of his own account of genetic mechanisms, which questions the existence of genetic variants associated with high ability. I have not followed that literature closely enough to know how solid the evidence of male over-representation is, but assuming it is reliable, I'd like to see studies that looked more broadly at other aspects of cognition of males who had spectacular ability in domains such as maths or chess. The question is how to reconcile such findings with  Mitchell's position – which he summarises rather bluntly by saying there are no genes for intelligence, only genes for stupidity. He does suggest that greater developmental instability in males might lead to some cases of extremely high-functioning, but that is at odds with his general view that instability generally leads to deficits, not strengths. I'd be interested in studies of these exceptional high achievers to look at their skills across a wider range of domains. Is it really the case that males at the very top end of the IQ distribution are uniformly good at everything, or are there compensating deficits? It's easy to think of anecdotal examples of geniuses who were lacking in what we might term social intelligence, and whose ability to flourish was limited to a very restricted ecological niche in the groves of academe. Maybe these are people whose specific focus on certain topics would have been detrimental to reproductive fitness in our ancestors, but who can thrive in modern society where people are able to pursue exceptionally narrow interests.  
If so, we can predict that at the point in the distribution where exceptional ability shows a strong male bias, the skill should be highly specific and accompanied by limitations in other domains of cognition or behaviour.
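The greater-male-variability point is easy to demonstrate with a toy simulation: even when two groups have identical means, a modestly larger standard deviation in one group produces marked over-representation at both tails. The numbers below (mean 100, standard deviations of 15 versus 16.5, cutoffs at 60 and 140) are illustrative assumptions for the sketch, not estimates from real data.

```python
import random

random.seed(1)

N = 200_000
# Identical means, but one group drawn with a ~10% larger SD
# (the SD ratio is an invented figure for illustration).
males = [random.gauss(100, 16.5) for _ in range(N)]
females = [random.gauss(100, 15.0) for _ in range(N)]

def tail_fraction(scores, lo=60, hi=140):
    """Proportion of scores below lo and above hi."""
    low = sum(s < lo for s in scores) / len(scores)
    high = sum(s > hi for s in scores) / len(scores)
    return low, high

m_low, m_high = tail_fraction(males)
f_low, f_high = tail_fraction(females)
print(f"below 60:  males {m_low:.4f} vs females {f_low:.4f}")
print(f"above 140: males {m_high:.4f} vs females {f_high:.4f}")
```

Despite identical averages, the higher-variance group is roughly twice as frequent beyond either cutoff – a reminder that tail ratios are very sensitive to small differences in spread.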
8. It is difficult to distinguish polygenic effects from genetic heterogeneity
Way back in the early 1900s, Mendelian genetics was criticised because it held that genetic material is transmitted in discrete units, and so seemed unable to explain the inheritance of continuous traits such as height, where a child's phenotype may be intermediate between those of the parents. The positions were reconciled by Ronald Fisher, who showed that if a phenotype is influenced by the combined impact of many genes of small effect, we would expect correlations between related individuals in continuous traits. This polygenic view of inheritance is thought to apply to many common traits and disorders. If so, then the best way to discover the genetic basis of a disorder is not to hunt through the genome for rare mutations, but to search for common variants of small effect. The problem is that this requires enormous samples to identify tiny effects, while also making it easy to find false positive associations. Genome-wide association methods were developed to address these issues, and have had some success in identifying genetic variants that have little effect in isolation but in aggregate play a role in causing disorder.
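Fisher's reconciliation can be sketched in a few lines of simulation: a trait built purely from discrete Mendelian alleles at many loci looks continuous, and shows the midparent-offspring correlation expected for a fully heritable additive trait (√½ ≈ 0.71). The locus count, allele frequency and equal effect sizes are simplifying assumptions for the sketch.

```python
import random

random.seed(2)

N_FAMILIES = 5_000
N_LOCI = 100   # many loci of small, equal effect (illustrative assumption)
P = 0.5        # allele frequency at every locus (simplification)

def random_genotype():
    # two alleles per locus, each 0 or 1
    return [(random.random() < P, random.random() < P) for _ in range(N_LOCI)]

def trait(genotype):
    # purely additive trait: total count of '1' alleles across loci
    return sum(a + b for a, b in genotype)

def child_of(mum, dad):
    # Mendelian transmission: one allele per locus from each parent
    return [(random.choice(m), random.choice(d)) for m, d in zip(mum, dad)]

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / (vx * vy) ** 0.5

midparent, child_scores = [], []
for _ in range(N_FAMILIES):
    mum, dad = random_genotype(), random_genotype()
    kid = child_of(mum, dad)
    midparent.append((trait(mum) + trait(dad)) / 2)
    child_scores.append(trait(kid))

r = pearson(midparent, child_scores)
print(f"midparent-offspring correlation: {r:.2f}")
```

Even though each locus is transmitted in all-or-none Mendelian fashion, the summed trait is smoothly distributed and children fall, on average, between their parents – exactly the pattern the early biometricians thought Mendelism could not produce.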
Mitchell, however, has a rather different approach. At a time when most geneticists were embracing the idea that conditions such as schizophrenia and autism were the result of the combined effect of the tiny influence of numerous common genetic variants, Mitchell (2012) argued for another possibility - that we may be dealing with rare variants of large effect, which differ from family to family. In Innate, he suggests it is a mistake to reduce this to an either/or question: a person's polygenic background may establish a degree of risk for disorder, with specific mutations then determining how far that risk is manifest.
This is not just an academic debate: it has implications for how we invest in science, and for clinical applications of genetics. Genome-wide association studies need enormous samples, and the collection, analysis and storage of data are expensive. There have been repeated criticisms that the yield of positive findings has been low and that the studies have not given good value for money. In particular, it's been noted that the effects of individual genetic variants are minuscule, can only be detected in enormous samples, and throw little light on underlying mechanisms (Turkheimer, 2012, 2016). This has led to a sense of gloom that this line of work is unlikely to provide explanations of disorder or improvements in treatment.
An approach currently in vogue is to derive a Polygenic Risk Score, based on all the genetic variants associated with a condition, weighted by the strength of association. This can give probabilistic information about the likelihood of a specific phenotype, but for cognitive and behavioural phenotypes the level of prediction is not impressive. As more data are obtained from enormous samples, prediction improves, and some scientists anticipate that Polygenic Risk Scores will become accurate enough to be used in personalised medicine or psychology. Others, though, have serious doubts. A thoughtful account of the pros and cons of Polygenic Risk Scores can be found in an interview that Ed Yong (2018) conducted with Daniel Benjamin, one of the authors of a recent study reporting Polygenic Risk Scores for educational attainment (Lee et al., 2018). Benjamin suggested that predicting educational attainment from genes is a non-starter, because prediction for individuals is very weak. He suggested instead that the research has value because a Polygenic Risk Score can be used as a covariate to control for genetic variation when studying the impact of environmental interventions. This, however, depends on results generalising to other samples: it is noteworthy that when the Polygenic Risk Score for educational attainment was tested on its ability to explain within-family variation (between siblings), its predictive power dropped (Lee et al., 2018).
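Mechanically, a Polygenic Risk Score is just a weighted sum: each risk allele a person carries contributes its estimated effect size from a genome-wide association study. A minimal sketch of the arithmetic, with invented SNP identifiers and weights (real scores use thousands to millions of variants):

```python
# Invented effect sizes (betas) per risk allele, standing in for
# GWAS summary statistics; the SNP ids and values are hypothetical.
gwas_weights = {
    "rs0001": 0.04,
    "rs0002": -0.02,
    "rs0003": 0.01,
    "rs0004": 0.03,
}

def polygenic_risk_score(genotype):
    """genotype maps SNP id -> risk-allele count (0, 1 or 2)."""
    return sum(beta * genotype.get(snp, 0) for snp, beta in gwas_weights.items())

# One hypothetical individual's allele counts at the scored SNPs
person = {"rs0001": 2, "rs0002": 1, "rs0003": 0, "rs0004": 1}
score = polygenic_risk_score(person)
print(f"PRS = {score:.2f}")
```

The simplicity of the calculation is part of the problem: the score inherits every bias in the effect-size estimates, which is why within-family tests and population stratification matter so much.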
It is often argued that knowledge of the genetic variants contributing to a Polygenic Risk Score will help identify the functions controlled by the relevant genes, which may lead to new discoveries in developmental neurobiology and drug design. However, others question whether Polygenic Risk Scores have the necessary biological specificity to fulfil this promise (Reimers et al., 2018). Furthermore, recent papers have raised concerns that population stratification means Polygenic Risk Scores may give misleading results: for instance, we might find a group of SNPs predictive of 'chopstick-eating skills', but this would simply reflect genetic variants that happen to differ between ethnic groups that do and don't eat with chopsticks (Barton et al., 2019).
I think Mitchell would in any case regard the quest for Polygenic Risk Scores as a distraction from more promising approaches that focus on finding rare variants of big effect. Rather than investing in analyses that require huge amounts of data to detect marginal associations between phenotypes and SNPs, his view is that we will make most progress by studying the consequences of mutations. The tussle between these viewpoints is reflected in two recent articles. Boyle, Li, and Pritchard (2017) queried some of the assumptions behind genome-wide association studies, and suggested that most progress will come from detecting rare variants that may help us understand the biological pathways involved in disorder. Wray et al. (2018) countered that while searching for de novo mutations is important for understanding severe childhood disorders, this approach is unlikely to be cost-effective for common diseases, where genome-wide association with enormous samples is the optimal strategy. In fact, the positions of these authors are not diametrically opposed: it is rather a question of which approach should be given most resources. The discussion involves more than scientific disagreement: reputations and large amounts of research funding are at stake.
Ethical implications
And so we come to the ethical issues around modern genetics. I hope I have at least convinced readers that a rational analysis of moral questions in this field requires us to move away from simplistic ideas of the genome as some kind of blueprint that determines brain structure and function. Ethical issues that are quite hard enough when outcomes are deterministic gain a whole new layer of complexity once we realise that there is a large contribution of chance in most relationships between genes and neurodevelopment.
But let's start with the simpler and more straightforward case where you can reliably predict how a person will turn out from knowledge of their genetic constitution. There are then two problematic issues to grapple with: 1) if you have knowledge of genetic constitution prenatally, under what situations would you consider using the information to select an embryo or terminate a pregnancy? 2) if a person with a genetically-determined condition exists, should they be treated differently on the basis of that condition?  
Some religions bypass the first question altogether, by arguing that it is never acceptable to terminate a pregnancy. But, if we put absolutist positions to one side, I suspect most people would give a range of answers to question 1, depending on the impact of the genetic condition: termination may be judged acceptable or even desirable if the impacts on the developing brain are so severe that the infant would be unlikely to survive into childhood, would be in a great deal of distress or pain, or would be severely mentally impaired. At the other extreme, terminating a pregnancy because a person lacks a Y chromosome seems highly unethical to many people, yet this practice is legal in some countries, and widely adopted even where it is not (Hvistendahl, 2011). These polarised scenarios may seem relatively straightforward, but there are numerous challenges because there will always be cases that fall between the extremes.
It is impossible to ignore the role of social factors in our judgements. Many hearing people are shocked when they discover that some Deaf parents want to use reproductive technologies to select for Deafness in their child (Mand et al., 2009), but those who wish to adopt such a practice argue that Deafness is a cultural difference rather than a disability.
Now let's add chance into the mix. Suppose you have a genetic condition that makes it more likely that a child will have learning difficulties or behaviour problems, but the range of outcomes is substantial; the typical outcome is mild educational difficulties, and many children do perfectly well.  This is exactly the dilemma facing parents of children who are found on prenatal screening to have an extra X or Y chromosome.  In many countries parents may be offered a termination of pregnancy in such cases, but it is clear that whether or not they decide to continue with the pregnancy depends on what they are told about potential outcomes (Jeon, Chen, & Goodson, 2012). 
Like Kevin Mitchell, I don't have easy solutions to such dilemmas, but like him, I think that we need to anticipate that such thorny ethical questions are likely to increase as our knowledge of genetics expands – with many if not most genetic influences being probabilistic rather than deterministic. The science fiction film Gattaca portrays a chilling vision of a world where genetic testing at birth is used to identify elite individuals who will have the opportunity to be astronauts, leaving those with less optimal alleles to do menial work – even though prediction is only probabilistic, and those with 'invalid' genomes may have desirable traits that were not screened for. The Gattaca vision is bleak not just because of the evident unfairness of using genetic screening to allocate resources to people, but because a world inhabited by a set of clones, selected for perfection on a handful of traits, could wipe out the diversity that makes us such a successful species.
There's another whole set of ethical issues that have to do with how we treat people who are known to have genetic differences. Suppose we find that someone standing trial has a genetic mutation that is known to be associated with aggressive outbursts. Should this genetic information be used in mitigation for criminal behaviour? Some might say this would be tantamount to letting a criminal get away with antisocial behaviour, whereas others may regard it as unethical to withhold this information from the court. The problem, again, becomes particularly thorny because the association between genetic variation and aggression is always probabilistic. Is someone with a genetic variant that confers a 50% increase in risk of aggression less guilty than someone with a different variant that makes them 50% less likely to be aggressive? Of course, it could be argued that the most reliable genetic predictor of criminality is having a Y chromosome, but we do not therefore treat male criminals more leniently than females. Rather, we recognise that genetic constitution is but one aspect of an individual's make-up, and that the factors that lead a person to commit a crime go far beyond their DNA sequence.
As we gain ever more knowledge of genetics, the ethical challenges raised by our ability to detect and manipulate genetic variation need to be confronted. To do that we need an up-to-date and nuanced understanding of the ways in which genes influence neurodevelopment and ultimately affect behaviour. Innate provides exactly that.
Acknowledgement
I thank David Didau for comments on a draft version of this review, and in particular for introducing me to Gattaca.
References
Barton, N., Hermisson, J., & Nordborg, M. (2019). Population genetics: Why structure matters. eLife, 8, e45380. doi:10.7554/eLife.45380
Beckmann, J. S., Estivill, X., & Antonarakis, S. E. (2007). Copy number variants and genetic traits: closer to the resolution of phenotypic to genotypic variability. Nature Reviews Genetics, 8(8), 639-646.
Boyle, E. A., Li, Y. I., & Pritchard, J. K. (2017). An expanded view of complex traits: From polygenic to omnigenic. Cell, 169(7), 1177-1186.
Goldacre, B. (2014). I think you'll find it's a bit more complicated than that. London, UK: Harper Collins.
Hvistendahl, M. (2011). Unnatural Selection: Choosing Boys Over Girls, and the Consequences of a World Full of Men. New York: Public Affairs.
Jeon, K. C., Chen, L.-S., & Goodson, P. (2012). Decision to abort after a prenatal diagnosis of sex chromosome abnormality: a systematic review of the literature. Genetics in Medicine, 14, 27-38.
Lee, J. J., Wedow, R., Okbay, A., Kong, E., Maghzian, O., Zacher, M., . . . Cesarini, D. (2018). Gene discovery and polygenic prediction from a genome-wide association study of educational attainment in 1.1 million individuals. Nature Genetics, 50(8), 1112-1121.
Mand, C., Duncan, R. E., Gillam, L., Collins, V., & Delatycki, M. B. (2009). Genetic selection for deafness: the views of hearing children of deaf adults. Journal of Medical Ethics, 35(12), 722-728. doi:http://dx.doi.org/10.1136/jme.2009.030429
Mitchell, K. J. (2012). What is complex about complex disorders? Genome Biology, 13, 237.
Niarchou, M., Chawner, S. J. R. A., Doherty, J. L., Maillard, A. M., Jacquemont, S., Chung, W. K., . . . van den Bree, M. B. M. (2019). Psychiatric disorders in children with 16p11.2 deletion and duplication. Translational Psychiatry, 9(8). doi:10.1038/s41398-018-0339-8
Normile, D. (2018). Shock greets claim of CRISPR-edited babies. Science, 362(6418), 978-979. doi:10.1126/science.362.6418.978
Parens, E., Appelbaum, P., & Chung, W. (2019). Embryo editing for higher IQ is a fantasy. Embryo profiling for it is almost here. Stat+(Feb 12 2019).
Reimers, M. A., Craver, C., Dozmorov, M., Bacanu, S. A., & Kendler, K. S. (2018). The coherence problem: Finding meaning in GWAS complexity. Behavior Genetics. doi:https://doi.org/10.1007/s10519-018-9935-x
Toma, C., Pierce, K. D., Shaw, A. D., Heath, A., Mitchell, P. B., Schofield, P. R., & Fullerton, J. M. (2018). Comprehensive cross-disorder analyses of CNTNAP2 suggest it is unlikely to be a primary risk gene for psychiatric disorders. bioRxiv. doi:https://doi.org/10.1101/363846
Turkheimer, E. (2012). Genome Wide Association Studies of behavior are social science. In K. S. Plaisance & T. A. C. Reydon (Eds.), Philosophy of Behavioral Biology, 43 Boston Studies in the Philosophy of Science 282, DOI 10.1007/978-94-007-1951-4_3, (pp. 43-64): Springer Science+Business Media.
Turkheimer, E. (2016). Weak genetic explanation 20 years later: Reply to Plomin et al (2016). Perspectives on Psychological Science, 11(1), 24-28. doi:10.1177/1745691615617442
World Health Organization (2019). WHO establishing expert panel to develop global standards for governance and oversight of human genome editing. https://www.who.int/ethics/topics/human-genome-editing/en/.
Wray, N. R., Wijmenga, C., Sullivan, P. F., Yang, J., & Visscher, P. M. (2018). Common disease Is more complex than implied by the core gene omnigenic model. Cell, 173, 1573-1590. doi:10.1016/j.cell.2018.05.051
Yong, E. (2018). An enormous study of the genes related to staying in school. The Atlantic. https://www.theatlantic.com/science/archive/2018/07/staying-in-school.../565832/