Wednesday, 9 October 2019

Attempting to communicate with the BBC: it's not just politicians who refuse to answer questions

A couple of weeks ago there was an outburst of public indignation after it emerged that the BBC had censured their presenter Naga Munchetty. As reported by the Independent, in July BBC Breakfast covered comments made by President Trump about four US congresswomen, none of them white, whom he told to "go back and help fix the totally broken and crime-infested places from which they came." Naga commented: "Every time I've been told as a woman of colour to 'go home', to 'go back to where I've come from', that was embedded in racism."

Most of the commentary at the time focused on whether or not Naga had behaved unprofessionally in making the comment, or whether she was justified in describing Trump's comment as racist. The public outcry has been heard: the Director General of the BBC, Tony Hall, has since overturned the decision to censure her.

There is, however, another concern about the BBC's action: why did they choose to act on this matter in the first place? All accounts of the story talk of 'a complaint'. The BBC complaints website explains that they can get as many as 200,000 complaints every year, which averages out at around 550 a day. Now, I would have thought that they might have some guidelines in place about which complaints to act upon. In particular, they would be expected to take most seriously issues about which there were a large number of complaints. So it seems curious, to say the least, if they decided to act on a single complaint, and I started wondering whether it had been made by someone with political clout.

The complaints website allows you to submit a complaint or make a comment, but not to ask a question. Nevertheless, I submitted some questions through the complaints portal, and this morning I received a response, which I append in full below. Here are my questions and the answers:

Q1: Was there really just ONE complaint?
BBC: Ignored

Q2: If yes, how often does the BBC complaints department act on a SINGLE complaint?
BBC: Ignored

Q3: Who made the complaint?
BBC: We appreciate you would like specific information about the audience member who complained about Naga's comments but we can't disclose details of the complainant, but any viewer or listener can make a complaint and pursue it through the BBC's Complaints framework.

Q4: If you cannot disclose identity of the complainant, can you confirm whether it was anyone in public life?
BBC: Ignored

Q5: Can you reassure me that any action against Munchetty was not made because of any political pressure on the BBC?
BBC: Ignored

I guess the BBC are so used to politicians not answering questions that they feel it is acceptable behaviour. I don't, and I treat evasion as evidence of hiding something they don't want us to hear. I was interested to see that Ofcom is on the case, though they appear to have been fobbed off just as I was. Let's keep digging. I smell a large and ugly rat.

Here is the full text of the response:

Dear Prof Bishop
Reference CAS-5652646-TNKKXL
Thank you for contacting the BBC.
I understand you have concerns about the BBC Complaints process specifically with regard to a complaint made regarding Breakfast presenter Naga Munchetty and comments about US President Trump. 
Naturally we regret when any member of our audience is unhappy with any aspect of what we do. We treat all complaints seriously, but what matters is whether the complaint is justified and the BBC acted wrongly. If so we apologise. If we don’t agree that our standards or public service obligations were breached, we try to explain why. We appreciate you would like specific information about the audience member who complained about Naga's comments but we can't disclose details of the complainant, but any viewer or listener can make a complaint and pursue it through the BBC's Complaints framework.
Nonetheless, I understand this is something you feel strongly about and I’ve included your points on our audience feedback report that is sent to senior management each morning and ensures that your complaint has been seen by the right people quickly. 
We appreciate you taking the time to register your views on this matter as it is greatly helpful in informing future decisions at the BBC.
Thanks again for getting in touch.
Kind regards
John Hamill
BBC Complaints Team

Tuesday, 10 September 2019

Responding to the replication crisis: reflections on Metascience2019

Talk by Yang Yang at Metascience 2019
I'm just back from Metascience 2019. It was an immense privilege to be invited to speak at such a stimulating and timely meeting, and I would like to thank the Fetzer Franklin Fund, who not only generously funded the meeting, but also ensured the impressively smooth running of a packed schedule. The organisers - Brian Nosek, Jonathan Schooler, Jon Krosnick, Leif Nelson and Jan Walleczek - did a great job in bringing together speakers on a range of topics, and the quality of talks was outstanding. For me, highlights were hearing great presentations from people well outside my usual orbit, such as Melissa Schilling on 'Where do breakthrough ideas come from?', Carl Bergstrom on modelling grant funding systems, and Cailin O'Connor on scientific polarisation.

The talks were recorded, but I gather it may be some months before the film is available. Meanwhile, slides of many of the presentations are available here, and there is a copious Twitter stream on the hashtag #metascience2019. Special thanks are due to Joseph Fridman (@joseph_fridman): if you look at his timeline, you can pretty well reconstruct the entire meeting from live tweets. Noah Haber (@NoahHaber) also deserves special mention for extensive commentary, including a post-conference reflection starting here.  It is a sign of a successful meeting, I think, if it gets people, like Noah, raising more general questions about the direction the field is going in, and it is in that spirit I would like to share some of my own thoughts.

In the past 15 years or so, we have made enormous progress in documenting problems with credibility of research findings, not just in psychology, but in many areas of science. Metascience studies have helped us quantify the extent of the problem and begun to shed light on the underlying causes. We now have to confront the question of what we do next. That would seem to be a no-brainer: we need to concentrate on fixing the problem. But there is a real danger of rushing in with well-intentioned solutions that may be ineffective at best or have unintended consequences at worst.

One question is whether we should be continuing with a focus on replication studies. Noah Haber was critical of the number of talks that focused on replication, but I had a rather different take on this: it depends on what the purpose of a replication study is. I think further replication initiatives, in the style of the original Reproducibility Project, can be invaluable in highlighting problems (or not) in a field. Tim Errington's talk about the Cancer Biology Reproducibility Project demonstrated beautifully how a systematic attempt to replicate findings can reveal major problems in a field. Studies in this area are often dependent on specialised procedures and materials, which are either poorly described or unavailable. In such circumstances it becomes impossible for other labs to reproduce the methods, let alone replicate the results. The mindset of many researchers in this area is also unhelpful – the sense is that competition dominates, and open science ideals are not part of the training of scientists. But these are problems that can be fixed.

As was evident from my questions after the talk, I was less enthused by the idea of doing a large replication of Daryl Bem's studies on extra-sensory perception. Zoltán Kekecs and his team have put in a huge amount of work to ensure that this study meets the highest standards of rigour, and it is a model of collaborative planning, ensuring input into the research questions and design from those with very different prior beliefs. I just wondered what the point was. If you want to put in all that time, money and effort, wouldn't it be better to investigate a hypothesis about something that doesn't contradict the laws of physics? There were two responses to this. Zoltán's view was that the study would tell us more than whether or not precognition exists: it would provide a model of methods that could be extended to other questions. That seems reasonable: some of the innovations, in terms of automated methods and collaborative working, could be applied in other contexts to ensure original research was done to the highest standards. Jonathan Schooler, on the other hand, felt it was unscientific of me to prejudge the question, given a large previous literature of positive findings on ESP, including a meta-analysis. Given that I come from a field where there are numerous phenomena that have been debunked after years of apparent positive evidence, I was not swayed by this argument. (See for instance this blogpost on 5-HTTLPR and depression). If the study by Kekecs et al sets such a high standard that the results will be treated as definitive, then I guess it might be worthwhile. But somehow I doubt that a null finding in this study will convince believers to abandon this line of work.

Another major concern I had was the widespread reliance on proxy indicators of research quality. One talk that exemplified this was Yang Yang's presentation on machine intelligence approaches to predicting replicability of studies. He started by noting that non-replicable results get cited just as much as replicable ones: a depressing finding indeed, and one that motivated the study he reported. His talk was clever at many levels. It was ingenious to use the existing results from the Reproducibility Project as a database that could be mined to identify characteristics of results that replicated. I'm not qualified to comment on the machine learning approach, which involved using ngrams extracted from texts to predict a binary category of replicable or not. But implicit in this study was the idea that the results from this exercise could be useful in future in helping us identify, just on the basis of textual analysis, which studies were likely to be replicable.

Now, this seems misguided on several levels. For a start, as we know from the field of medical screening, the usefulness of a screening test depends on the base rate of the condition you are screening for, the extent to which the sample you develop the test on is representative of the population, and the accuracy of prediction. I would be frankly amazed if the results of this exercise yielded a useful screener. But even if they did, then Goodhart's law would kick in: as soon as researchers became aware that there was a formula being used to predict how replicable their research was, they'd write their papers in a way that would maximise their score. One can even imagine whole new companies springing up who would take your low-scoring research paper and, for a price, revise it to get a better score. I somehow don't think this would benefit science. In defence of this approach, it was argued that it would allow us to identify characteristics of replicable work, and encourage people to emulate these. But this seems back-to-front logic. Why try to optimise an indirect, weak proxy for what makes good science (ngram characteristics of the write-up) rather than optimising, erm, good scientific practices. Recommended readings in this area include Philip Stark's short piece on Preproducibility, as well as Florian Markowetz's 'Five selfish reasons to work reproducibly'.
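To make the base-rate point concrete, here is a minimal sketch in R. Every number in it is invented for illustration (none comes from Yang Yang's study): even a hypothetical screener that is 80% accurate in both directions gives a verdict you can trust only about three-quarters of the time when 40% of findings truly replicate.

```r
# Invented numbers: base rate of truly replicable findings, plus the
# sensitivity and specificity of a hypothetical 'replicability screener'
base_rate   <- 0.40   # P(finding truly replicates)
sensitivity <- 0.80   # P(screener passes it | truly replicable)
specificity <- 0.80   # P(screener flags it | truly non-replicable)

# Positive predictive value: of the findings the screener passes,
# what proportion would actually replicate?
ppv <- (sensitivity * base_rate) /
  (sensitivity * base_rate + (1 - specificity) * (1 - base_rate))
round(ppv, 2)   # 0.73: roughly a quarter of 'passes' would still fail
```

Push the base rate lower and the screener's verdicts degrade further, which is exactly the lesson from medical screening.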

My reservations here are an extension of broader concerns about reliance on text-mining in meta-science (see e.g. https://peerj.com/articles/1715/). We have this wonderful ability to pull in mountains of data from the online literature to see patterns that might be undetectable otherwise. But ultimately, the information that we extract cannot give more than a superficial sense of the content. It seems sometimes that we're moving to a situation where science will be done by bots, leaving the human brain out of the process altogether. This would, to my mind, be a mistake.

Sunday, 8 September 2019

Voting in the EU Referendum: Ignorance, deceit and folly

As a Remainer, I am baffled as to what Brexiteers want. If you ask them, as I sometimes do on Twitter, they mostly give you slogans such as "Taking Back Control". I'm more interested in specifics, i.e. what things do people think will be better for them if we leave. It is clear that things that matter to me – the economy, the health service, scientific research, my own freedom of movement in Europe - will be damaged by Brexit. I would put up with that if there was some compensating factor that would benefit other people, but I'm not convinced there is. In fact, all the indications are that people who voted to leave will suffer the negatives of Brexit just as much as those who voted to remain.

But are people who want to leave really so illogical? Brexit and its complexities is a long way from my expertise. I've tried to educate myself so that I can understand the debates about different options, but I'm aware that, despite being highly educated, I don't know much about the EU. I recently decided that, as someone who is interested in evidence, I should take a look at some of the surveys on this topic. We all suffer from confirmation bias, the tendency to seek out, process and remember just that information that agrees with our preconceptions, so I wanted to approach this as dispassionately as I could. The UK in a Changing Europe project is a useful starting place. They are funded by the Economic and Social Research Council, and appear to be even-handed. I have barely begun to scratch the surface of the content of their website, but I found their report Brexit and Public Opinion 2019 provides a useful and readable summary of recent academic research.

One paper summarised in the Brexit and Public Opinion 2019 report caught my attention in particular. Carl, Richards and Heath (2019) reported results from a survey of over 3,000 people selected to be broadly representative of the British population, who were asked 15 questions about the EU. Overall, there was barely any difference between Leave and Remain voters in the accuracy of answers. The authors noted that these results counteracted a common belief, put forward by some prominent commentators, that Leave voters had, on average, a weaker understanding of what they voted for than Remain voters. Interestingly, Carl et al did confirm, as other surveys had done, that those voting Leave were less well-educated than Remain voters, and indeed, in their study, the Leave voters did less well on a test of probabilistic reasoning. But this was largely unrelated to their responses to the EU survey. The one factor that did differentiate Leave and Remain voters was how they responded to a subset of questions that were deemed 'ideologically convenient' for their position: I have replotted the data below*. As an aside, I'm not entirely convinced by the categorisation of certain items as ideologically convenient – shown in the figure with £ and € symbols – but that is a minor point.
Responses to survey items from Carl et al (2019) Table 1.  Items marked £ were regarded as ideologically convenient for Brexit voters; those marked € as convenient for Remain voters
I took a rather different message away from the survey, however. I have to start by saying that I was rather disappointed when I read the survey items, because they didn't focus on implications of EU membership for individuals. I would have liked to see items probing knowledge of how leaving the EU might affect trade, immigration and travel, and relations between England and the rest of the UK. The survey questions instead tested factual knowledge about the EU, which could be scored using a simple Yes/No response format. It would perhaps have been more relevant, when seeking evidence for the validity of the referendum, to assess how accurately people estimated the costs and benefits of EU membership.

With that caveat, the most striking thing to me was how poorly people did on the survey, regardless of whether they voted Leave or Remain. There were 15 two-choice questions. If people were just guessing at random, they would be expected to score on average 7.5, with 95% of people scoring between 4 and 11.  Carl et al plotted the distribution of scores (Figure 2) and noted that the average score was only 8.8, not much higher than what would be expected if people were just guessing. Only 11.2% of Leave voters and 13.1% of Remain voters scored 12 or more. However, the item-level responses indicate that people weren't just guessing, because there were systematic differences from item to item. On some items, people did better than chance. But, as Carl et al noted, there were four items where people performed below chance. Three of these items had been designated as "ideologically convenient" for the Remain position, and one as convenient for the Leave position.
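The guessing baseline is simple binomial arithmetic and easy to verify; here is a minimal check in R (nothing below comes from the paper beyond the fact of 15 two-choice items):

```r
# Scores under pure guessing on 15 two-choice items: Binomial(15, 0.5)
n <- 15; p <- 0.5
n * p                           # expected score: 7.5
qbinom(c(0.025, 0.975), n, p)   # central 95% of guessers score between 4 and 11
1 - pbinom(11, n, p)            # P(scoring 12+ by luck alone): about 0.018
```

So the 11-13% of voters scoring 12 or more is well above the roughly 2% expected from guessing alone, consistent with the item-level evidence that responses were systematic rather than random.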

Figure 1 from Carl et al (2019). Distributions of observed scores and scores expected under guessing.

Carl et al cited a book by Jason Brennan, Against Democracy, which argues that "political decisions are presumed to be unjust if they are made incompetently, or in bad faith, or by a generally incompetent decision-making body". I haven't read the book yet, but that seems a reasonable point.

However, having introduced us to Brennan's argument, Carl et al explained: "Although our study did not seek to determine whether voters overall were sufficiently well informed to satisfy Brennan's (2016) ‘competence principle’, it did seek to determine whether there was a significant disparity in knowledge between Leave and Remain voters, something which––if present––could also be considered grounds for questioning the legitimacy of the referendum result."

My view is that, while Carl et al may not have set out to test the competence principle, their study nevertheless provided evidence highly relevant to the principle, evidence that challenges the validity of the referendum. If one accepts the EU questionnaire as an indicator of competence, then both Leave and Remain voters are severely lacking. Not only do they show a woeful ignorance of the EU, they also, in some respects, show evidence of systematic misunderstanding. 72% of Leave voters and 50% of Remain voters endorsed the statement that "More than ten per cent of British government spending goes to the EU." (Item M in Figure 1). According to the Europa.eu website, the correct figure is 0.28%. So the majority of people think that we send the EU at least 36 times more money than is the case. The lack of overall difference between Leave and Remain voters is of interest, but the levels of ignorance or systematic misunderstanding on key issues are striking in both groups. I don't exclude myself from this generalisation: I scored only 10 out of 15 in the survey, and there were some lucky guesses among my answers.

I have previously made a suggestion that seems in line with Jason Brennan's ideas – that if we were to have another referendum, people should first have to pass a simple quiz to demonstrate that they have a basic understanding of what they are voting for. The results of Carl et al suggest, however, that this would disenfranchise most of the population. Given how ignorant we are about the EU, it does seem remarkable that we are now in a position where we have a deeply polarised population, with people self-identifying as Brexit or Remain voters more strongly than they identify with political parties (Evans & Schaffner, 2019).

*I would like to thank Lindsay Richards for making the raw data available to me, in a very clear and well-documented format. 

References

Carl, N., Richards, L., & Heath, A. (2019). Leave and Remain voters' knowledge of the EU after the referendum of 2016. Electoral Studies, 57, 90-98. https://doi.org/10.1016/j.electstud.2018.11.003

Evans, G., & Schaffner, F. (2019). Brexit identity vs party identity. In A. Menon (Ed.), Brexit and Public Opinion 2019.

Saturday, 10 August 2019

A day out at 10 Downing Street

Yesterday, I attended a meeting at 10 Downing Street with Dominic Cummings, special advisor to Boris Johnson, for a discussion about science funding. I suspect my invitation will be regarded, in hindsight, as a mistake, and I hope some hapless civil servant does not get into trouble over it. I discovered that I was on the invitation list because of a recommendation by the eminent mathematician Tim Gowers, who is venerated by Cummings. Tim wasn't able to attend the meeting, but apparently he is a fan of my blog, and we have bonded over a shared dislike of the evil empire of Elsevier. I had heard that Cummings liked bold, new ideas, and I thought that I might be able to contribute something, given that science funding is something I have blogged about.

The invitation came on Tuesday and, having confirmed that it was not a spoof, I spent some time reading Cummings' blog, to get a better idea of where he was coming from. The impression is that he is besotted with science, especially maths and technology, and impatient with bureaucracy. That seemed promising common ground.

The problem, though, is that as a major facilitator of Brexit in 2016, who is now persisting with the idea that Brexit must be achieved "at any cost", he is doing immense damage, because science transcends national boundaries. Don't just take my word for it: it's a message that has been stressed by the President of the Royal Society, the Government's Chief Scientific Advisor, the Chair of the Wellcome Trust, the President of the Academy of Medical Sciences, and the Director of the Crick Institute, among others. 

The day before the meeting, I received an email to say that the topic of discussion would be much narrower than I had been led to believe. The other invitees were four Professors of Mathematics and the Director of the Engineering and Physical Sciences Research Council. We were sent a discussion document written by one of the professors outlining a wish list for improvements in funding for academic mathematics in the UK. I wasn't sure if I was a token woman: I suspect Cummings doesn't go in for token women and that my invite was simply because it had been assumed that someone recommended by Gowers would be a mathematician. I should add that my comments here are in a personal capacity and my views should not be taken as representing those of the University of Oxford.

The meeting started, rather as expected, with Cummings saying that we would not be talking about Brexit, because "everyone has different views about Brexit" and it would not be helpful. My suspicion was that everyone around the table other than Cummings had very similar views about Brexit, but I could see that we'd not get anywhere arguing the point. So we started off feeling rather like a patient who visits a doctor for medical advice, only to be told "I know I just cut off your leg, but let's not mention that."

The meeting proceeded in a cordial fashion, with Cummings expressing his strong desire to foster mathematics in British universities, and asking the mathematicians to come up with their "dream scenario" for dramatically enhancing the international standing of their discipline over the next few years. As one might expect, more funding for researchers at all levels, longer duration of funding, plus less bureaucracy around applying for funding were the basic themes, though Brexit-related issues did keep leaking into the conversation – everyone was concerned about difficulties of attracting and retaining overseas talent, and about loss of international collaborations funded by the European Research Council. Cummings was clearly proud of the announcement on Thursday evening about easing of visa restrictions on overseas scientists, which has potential to go some way towards mitigating some of the problems created by Brexit. I felt, however, that he did not grasp the extent to which scientific research is an international activity, and breakthroughs depend on teams with complementary skills and perspectives, rather than the occasional "lone genius". It's not just about attracting "the very best minds from around the world" to come and work here.

Overall, I found the meeting frustrating. First, I felt that Cummings was aware that there was a conflict between his twin aims of pursuit of Brexit and promotion of science, but he seemed to think this could be fixed by increasing funding and cutting regulation. I also wonder where on earth the money is coming from. Cummings made it clear that any proposals would need Treasury approval, but he encouraged the mathematicians to be ambitious, and talked as if anything was possible. In a week when we learn the economy is shrinking for the first time in years, it's hard to believe he has found the forest of magic money trees that are needed to cover recent spending announcements, let alone additional funding for maths.

Second, given Cummings' reputation, I had expected a far more wide-ranging discussion of different funding approaches. I fully support increased funding for fundamental mathematics, and did not want to cut across that discussion, so I didn't say much. I had, however, expected a bit more evidence of creativity. In his blog, Cummings refers to the Defense Advanced Research Projects Agency (DARPA), which is widely admired as a model for how to foster innovation. DARPA was set up in 1958 with the goal of giving the US superiority in military and other technologies. It combined blue-skies and problem-oriented research, and was immensely successful, leading to the development of the internet, among other things. In his preamble, Cummings briefly mentioned DARPA as a useful model. Yet, our discussion was entirely about capacity-building within existing structures.

Third, no mention was made of problem-oriented funding. Many scientists dislike having governments control what they work on, and indeed, blue-skies research often generates quite unexpected and beneficial outcomes. But we are in a world with urgent problems that would benefit from the focussed attention of an interdisciplinary, and dare I say it, international group of talented scientists. In the past, it has taken world wars to force scientists to band together to find solutions to immediate threats. The rapid changes in the Arctic suggest that the climate emergency should be treated just like a war - a challenge to be tackled without delay. We should be deploying scientists, including mathematicians, to explore every avenue to mitigating the effects of global heating – physical and social – right now. Although there is interesting research on solar geoengineering going on at Harvard, it is clear that, under the Trump administration, we aren't going to see serious investment from the USA in tackling global heating. And, in any case, a global problem as complex as climate needs a multi-pronged solution. The economist Mariana Mazzucato understands this: her proposals for mission-oriented research take a different approach to the conventional funding agencies we have in the UK. Yet when I asked whether climate research was a priority in his planning, Cummings replied "it's not up to me". He said that there were lots of people pushing for more funding for research on "climate change or whatever", but he gave the impression that it was not something he would give priority to, and he did not display a sense of urgency. That's surprising in someone who is scientifically literate and has a child.

In sum, it's great that we have a special advisor who is committed to science. I'm very happy to see mathematics as a priority funding area. But I fear Dominic Cummings overestimates the extent to which he can mitigate the negative consequences of Brexit, and it is particularly unfortunate that his priorities do not include the climate emergency that is unfolding.

Saturday, 3 August 2019

Corrigendum: a word you may hope never to encounter


I have this week submitted a 'corrigendum' to a journal for an article published in the American Journal of Medical Genetics B (Bishop et al, 2006). It's just a fancy word for 'correction', and journals use it contrastively with 'erratum'. Basically, if the journal messes up and prints something wrong, it's an erratum. If the author is responsible for the mistake, it's a corrigendum.

I'm trying to remember how many corrigenda I've written over the 40-odd years I've been publishing: there have been at least three previous cases that I can remember, but there could be more. I think this one was the worst; previous errors have tended to affect numbers in only a minor way. In this case, a whole table of numbers (Table II) was thrown out, and although the main findings were upheld, there were some changes in the details.

I discovered the error when someone asked for the data for a meta-analysis. I was initially worried I would not be able to find the files, but fortunately, I had archived the dataset on a server, and eventually tracked it down. But it was not well-documented, and I then had the task of trawling through a number of cryptically-named files to try and work out which one was the basis for the data in the paper. My brain slowly reconstructed what the variable names meant and I got to the point of thinking I'd better check that this was the correct dataset by rerunning the analysis. Alas, although I could recreate most of what was published, I had the chilling realisation that there was a problem with Table II.

Table II was the one place in the analysis where, in trying to avoid one problem with the data (non-independence), I created a whole new problem (wrong numbers). I had data on siblings of children with autism, and in some cases there were two or three siblings in the family. These days I would have considered using a multilevel model to take family structure into account, but in 2005 I didn't know how to do that, and instead I decided to take a mean value for each family. So if there was one child, I used their score, but if there were 2 or 3, then I averaged them. The N was then the number of families, not the number of children.

And here, dear Reader, is where I made a fatal mistake. I thought the simplest way to do this would be by creating a new column in my Excel spreadsheet which had the mean for each family, computing this by manually entering a formula based on the row numbers for the siblings in that family. The number of families was small enough for this to be feasible, and all seemed well. However, I noticed when I opened the file that I had pasted a comment in red on the top row that said 'DO NOT SORT THIS FILE!'. Clearly, I had already run into problems with my method, which would be totally messed up if the rows were reordered. Despite my warning message to myself, somewhere along the line, it seems that a change was made to the numbering, and this meant that a few children had been assigned to the wrong family. And that's why Table II had gremlins in it and needed correcting.

I now know that doing computations in Excel is almost always a bad idea, but in those days, I was innocent enough to be impressed with its computational possibilities. Now I use R, and life is transformed. The problem of computing a mean for each family can be scripted pretty easily, and then you have a lasting record of the analysis, which can be reproduced at any time. In my current projects, I aim to store data with a data dictionary and scripts on a repository such as Open Science Framework, with a link in the paper, so anyone can reconstruct the analysis, and I can find it easily if someone asks for the data. I wish I had learned about this years ago, but at least I can now use this approach with any new data – and I also aim to archive some old datasets as well.
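For anyone facing the same task, here is a minimal sketch of that family-mean step in R; the column names and values are invented, not those of the 2006 dataset:

```r
# Toy data: one row per child, with an explicit family identifier
siblings <- data.frame(
  family_id = c(1, 2, 2, 3, 3, 3),
  score     = c(55, 60, 70, 40, 50, 45)
)

# One mean per family, keyed by family_id rather than row position,
# so re-sorting the data cannot scramble the families
family_means <- aggregate(score ~ family_id, data = siblings, FUN = mean)
family_means
#   family_id score
# 1         1    55
# 2         2    65
# 3         3    45
```

The crucial design difference from the Excel approach is that the grouping is tied to an identifier in the data, not to row numbers, so no warning message about sorting is ever needed.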

For a journal, a corrigendum is a nuisance: it costs time and money in production, and is usually pretty hard to link up to the original article, so it may be seen as all a bit pointless. This is especially so given that a corrigendum is only appropriate if the error is not major. If an error would alter the conclusions that you'd draw from the data, then the paper will need to be retracted. Nevertheless, it is important for the scientific record to be accurate, and I'm pleased to say that the American Journal of Medical Genetics took this seriously. They responded promptly to my email documenting the problem, suggesting I write a corrigendum, which I have now done.

I thought it worth blogging about this to show how much easier my life would have been if I had been using the practices of data management and analysis that I now am starting to adopt. I also felt it does no harm to write about making mistakes, which is usually a taboo subject. I've argued previously that we should be open about errors, to encourage others to report them, and to demonstrate how everyone makes mistakes, even when trying hard to be accurate (Bishop, 2018). So yes, mistakes happen, but you do learn from them.

References 
Bishop, D. V. M. (2018). Fallibility in science: Responding to errors in the work of oneself and others (Commentary). Advances in Methods and Practices in Psychological Science, 1(3), 432-438. https://doi.org/10.1177/2515245918776632 (For free preprint see: https://peerj.com/preprints/3486/)

Bishop, D. V. M., Maybery, M., Wong, D., Maley, A., & Hallmayer, J. (2006). Characteristics of the broader phenotype in autism: a study of siblings using the Children's Communication Checklist - 2. American Journal of Medical Genetics Part B (Neuropsychiatric Genetics), 141B, 117-122.

Saturday, 20 July 2019

A call for funders to ban institutions that use grant capture targets

I caused unease on Twitter this week when I criticised a piece in the Times Higher Education on 'How to win a research grant'. As I explained in a series of tweets, I have no objection to experienced grant-holders sharing their pearls of wisdom with other academics: indeed, I've given my own tips in the past. My objection was to the sentiment behind the lede beneath the headline: "Even in disciplines in which research is inherently inexpensive, ‘grant capture’ is increasingly being adopted as a metric to judge academics and universities. But with success rates typically little better than one in five, rejection is the fate of most applications." I made the observation that it might have been better if the Times Higher had noted that grant capture is a stupid way to evaluate academics.

Science is in trouble when getting grant funding is seen as an end in itself rather than a means to the end of doing good research, and researchers are rewarded in proportion to how much money they bring in. I've rehearsed the arguments for this view more than once on my blog (see, e.g. here); many of these points were anticipated by Raphael Gillett in 1991, long before 'grant capture' became widespread as an explicit management tool. Although my view is shared by some other senior figures (see, e.g., this piece by John Ioannidis), it is seldom voiced. When I suggested that the best approach to seeking funding was to wait until you had a great idea that you were itching to implement, the patience of my followers snapped. It was clear that to many people working in academia, this view is seen as naive and unrealistic. Quite simply, it's a case of get funded or get fired. When I started out, funding success may have been used informally to rate academics, but now it is often explicit, sometimes to the point where expected grant income targets are specified.

Encouraging more and more grant submissions is toxic, both for researchers and for science, but everyone feels trapped. So how could we escape from this fix?

I think the solution has to be down to funders. They should be motivated to tackle the problem for several reasons.
  • First, they are inundated with far more proposals than they can fund - to the extent that many of them use methods of "demand management" to stem the tide. 
  • Second, if people are pressurised into coming up with research projects in order to become or remain employed, this is not likely to lead to particularly good research. We might expect quality of proposals to improve if people are encouraged to take time to develop and hone a great idea.
  • Third, although peer review of grants is generally thought to be the best among various unsatisfactory options for selecting grants, it is known to have poor reliability, and there is an element of lottery as to who gets funded. There's a real risk that, with grant capture being used as a metric, many researchers are being lost from the system because they were unlucky rather than untalented. 
  • Fourth, if people are evaluated in terms of the amount of funding they acquire, they will be motivated to make their proposals as expensive as possible: this cannot be in the interests of the funders.
Funders have considerable power in their hands and they can use it to change the culture. This was neatly demonstrated when the Athena SWAN charter started up, originally focused on improving gender equality in STEMM subjects. Institutions paid lip service to it, but there was little action until the Chief Medical Officer, Sally Davies, declared that to be eligible for biomedical funding from NIHR, institutions would have to have a Silver Athena SWAN award.  This raising of the stakes concentrated the minds of Vice Chancellors to an impressive degree.

My suggestion is that major funders such as Research England, the Wellcome Trust and Cancer Research UK could at a stroke improve research culture in the UK by implementing a rule whereby any institution that used grant capture as a criterion for hiring, firing or promotion would be ineligible to host grants.

Reference
Gillett, R. (1991). Pitfalls in assessing research performance by grant income. Scientometrics, 22(2), 253-263.

Wednesday, 12 June 2019

Bishopblog catalogue (updated 12 June 2019)

Source: http://www.weblogcartoons.com/2008/11/23/ideas/

Those of you who follow this blog may have noticed a lack of thematic coherence. I write about whatever is exercising my mind at the time, which can range from technical aspects of statistics to the design of bathroom taps. I decided it might be helpful to introduce a bit of order into this chaotic melange, so here is a catalogue of posts by topic.

Language impairment, dyslexia and related disorders
The common childhood disorders that have been left out in the cold (1 Dec 2010) What's in a name? (18 Dec 2010) Neuroprognosis in dyslexia (22 Dec 2010) Where commercial and clinical interests collide: Auditory processing disorder (6 Mar 2011) Auditory processing disorder (30 Mar 2011) Special educational needs: will they be met by the Green paper proposals? (9 Apr 2011) Is poor parenting really to blame for children's school problems? (3 Jun 2011) Early intervention: what's not to like? (1 Sep 2011) Lies, damned lies and spin (15 Oct 2011) A message to the world (31 Oct 2011) Vitamins, genes and language (13 Nov 2011) Neuroscientific interventions for dyslexia: red flags (24 Feb 2012) Phonics screening: sense and sensibility (3 Apr 2012) What Chomsky doesn't get about child language (3 Sept 2012) Data from the phonics screen (1 Oct 2012) Auditory processing disorder: schisms and skirmishes (27 Oct 2012) High-impact journals (Action video games and dyslexia: critique) (10 Mar 2013) Overhyped genetic findings: the case of dyslexia (16 Jun 2013) The arcuate fasciculus and word learning (11 Aug 2013) Changing children's brains (17 Aug 2013) Raising awareness of language learning impairments (26 Sep 2013) Good and bad news on the phonics screen (5 Oct 2013) What is educational neuroscience? (25 Jan 2014) Parent talk and child language (17 Feb 2014) My thoughts on the dyslexia debate (20 Mar 2014) Labels for unexplained language difficulties in children (23 Aug 2014) International reading comparisons: Is England really doing so poorly? (14 Sep 2014) Our early assessments of schoolchildren are misleading and damaging (4 May 2015) Opportunity cost: a new red flag for evaluating interventions (30 Aug 2015) The STEP Physical Literacy programme: have we been here before? (2 Jul 2017) Prisons, developmental language disorder, and base rates (3 Nov 2017) Reproducibility and phonics: necessary but not sufficient (27 Nov 2017) Developmental language disorder: the need for a clinically relevant definition (9 Jun 2018)

Autism
Autism diagnosis in cultural context (16 May 2011) Are our ‘gold standard’ autism diagnostic instruments fit for purpose? (30 May 2011) How common is autism? (7 Jun 2011) Autism and hypersystematising parents (21 Jun 2011) An open letter to Baroness Susan Greenfield (4 Aug 2011) Susan Greenfield and autistic spectrum disorder: was she misrepresented? (12 Aug 2011) Psychoanalytic treatment for autism: Interviews with French analysts (23 Jan 2012) The ‘autism epidemic’ and diagnostic substitution (4 Jun 2012) How wishful thinking is damaging Peta's cause (9 June 2014) NeuroPointDX's blood test for Autism Spectrum Disorder ( 12 Jan 2019)

Developmental disorders/paediatrics
The hidden cost of neglected tropical diseases (25 Nov 2010) The National Children's Study: a view from across the pond (25 Jun 2011) The kids are all right in daycare (14 Sep 2011) Moderate drinking in pregnancy: toxic or benign? (21 Nov 2012) Changing the landscape of psychiatric research (11 May 2014)

Genetics
Where does the myth of a gene for things like intelligence come from? (9 Sep 2010) Genes for optimism, dyslexia and obesity and other mythical beasts (10 Sep 2010) The X and Y of sex differences (11 May 2011) Review of How Genes Influence Behaviour (5 Jun 2011) Getting genetic effect sizes in perspective (20 Apr 2012) Moderate drinking in pregnancy: toxic or benign? (21 Nov 2012) Genes, brains and lateralisation (22 Dec 2012) Genetic variation and neuroimaging (11 Jan 2013) Have we become slower and dumber? (15 May 2013) Overhyped genetic findings: the case of dyslexia (16 Jun 2013) Incomprehensibility of much neurogenetics research ( 1 Oct 2016) A common misunderstanding of natural selection (8 Jan 2017) Sample selection in genetic studies: impact of restricted range (23 Apr 2017) Pre-registration or replication: the need for new standards in neurogenetic studies (1 Oct 2017) Review of 'Innate' by Kevin Mitchell ( 15 Apr 2019)

Neuroscience
Neuroprognosis in dyslexia (22 Dec 2010) Brain scans show that… (11 Jun 2011)  Time for neuroimaging (and PNAS) to clean up its act (5 Mar 2012) Neuronal migration in language learning impairments (2 May 2012) Sharing of MRI datasets (6 May 2012) Genetic variation and neuroimaging (1 Jan 2013) The arcuate fasciculus and word learning (11 Aug 2013) Changing children's brains (17 Aug 2013) What is educational neuroscience? ( 25 Jan 2014) Changing the landscape of psychiatric research (11 May 2014) Incomprehensibility of much neurogenetics research ( 1 Oct 2016)

Reproducibility
Accentuate the negative (26 Oct 2011) Novelty, interest and replicability (19 Jan 2012) High-impact journals: where newsworthiness trumps methodology (10 Mar 2013) Who's afraid of open data? (15 Nov 2015) Blogging as post-publication peer review (21 Mar 2013) Research fraud: More scrutiny by administrators is not the answer (17 Jun 2013) Pressures against cumulative research (9 Jan 2014) Why does so much research go unpublished? (12 Jan 2014) Replication and reputation: Whose career matters? (29 Aug 2014) Open code: not just data and publications (6 Dec 2015) Why researchers need to understand poker (26 Jan 2016) Reproducibility crisis in psychology (5 Mar 2016) Further benefit of registered reports (22 Mar 2016) Would paying by results improve reproducibility? (7 May 2016) Serendipitous findings in psychology (29 May 2016) Thoughts on the Statcheck project (3 Sep 2016) When is a replication not a replication? (16 Dec 2016) Reproducible practices are the future for early career researchers (1 May 2017) Which neuroimaging measures are useful for individual differences research? (28 May 2017) Prospecting for kryptonite: the value of null results (17 Jun 2017) Pre-registration or replication: the need for new standards in neurogenetic studies (1 Oct 2017) Citing the research literature: the distorting lens of memory (17 Oct 2017) Reproducibility and phonics: necessary but not sufficient (27 Nov 2017) Improving reproducibility: the future is with the young (9 Feb 2018) Sowing seeds of doubt: how Gilbert et al's critique of the reproducibility project has played out (27 May 2018) Preprint publication as karaoke (26 Jun 2018) Standing on the shoulders of giants, or slithering around on jellyfish: Why reviews need to be systematic (20 Jul 2018) Matlab vs open source: costs and benefits to scientists and society (20 Aug 2018)

Statistics
Book review: biography of Richard Doll (5 Jun 2010) Book review: the Invisible Gorilla (30 Jun 2010) The difference between p < .05 and a screening test (23 Jul 2010) Three ways to improve cognitive test scores without intervention (14 Aug 2010) A short nerdy post about the use of percentiles (13 Apr 2011) The joys of inventing data (5 Oct 2011) Getting genetic effect sizes in perspective (20 Apr 2012) Causal models of developmental disorders: the perils of correlational data (24 Jun 2012) Data from the phonics screen (1 Oct 2012) Moderate drinking in pregnancy: toxic or benign? (21 Nov 2012) Flaky chocolate and the New England Journal of Medicine (13 Nov 2012) Interpreting unexpected significant results (7 June 2013) Data analysis: Ten tips I wish I'd known earlier (18 Apr 2014) Data sharing: exciting but scary (26 May 2014) Percentages, quasi-statistics and bad arguments (21 July 2014) Why I still use Excel (1 Sep 2016) Sample selection in genetic studies: impact of restricted range (23 Apr 2017) Prospecting for kryptonite: the value of null results (17 Jun 2017) Prisons, developmental language disorder, and base rates (3 Nov 2017) How Analysis of Variance Works (20 Nov 2017) ANOVA, t-tests and regression: different ways of showing the same thing (24 Nov 2017) Using simulations to understand the importance of sample size (21 Dec 2017) Using simulations to understand p-values (26 Dec 2017) One big study or two small studies? (12 Jul 2018)

Journalism/science communication
Orwellian prize for scientific misrepresentation (1 Jun 2010) Journalists and the 'scientific breakthrough' (13 Jun 2010) Science journal editors: a taxonomy (28 Sep 2010) Orwellian prize for journalistic misrepresentation: an update (29 Jan 2011) Academic publishing: why isn't psychology like physics? (26 Feb 2011) Scientific communication: the Comment option (25 May 2011)  Publishers, psychological tests and greed (30 Dec 2011) Time for academics to withdraw free labour (7 Jan 2012) 2011 Orwellian Prize for Journalistic Misrepresentation (29 Jan 2012) Time for neuroimaging (and PNAS) to clean up its act (5 Mar 2012) Communicating science in the age of the internet (13 Jul 2012) How to bury your academic writing (26 Aug 2012) High-impact journals: where newsworthiness trumps methodology (10 Mar 2013)  A short rant about numbered journal references (5 Apr 2013) Schizophrenia and child abuse in the media (26 May 2013) Why we need pre-registration (6 Jul 2013) On the need for responsible reporting of research (10 Oct 2013) A New Year's letter to academic publishers (4 Jan 2014) Journals without editors: What is going on? (1 Feb 2015) Editors behaving badly? (24 Feb 2015) Will Elsevier say sorry? (21 Mar 2015) How long does a scientific paper need to be? (20 Apr 2015) Will traditional science journals disappear? (17 May 2015) My collapse of confidence in Frontiers journals (7 Jun 2015) Publishing replication failures (11 Jul 2015) Psychology research: hopeless case or pioneering field? (28 Aug 2015) Desperate marketing from J. Neuroscience ( 18 Feb 2016) Editorial integrity: publishers on the front line ( 11 Jun 2016) When scientific communication is a one-way street (13 Dec 2016) Breaking the ice with buxom grapefruits: Pratiques de publication and predatory publishing (25 Jul 2017) Should editors edit reviewers? ( 26 Aug 2018)

Social Media
A gentle introduction to Twitter for the apprehensive academic (14 Jun 2011) Your Twitter Profile: The Importance of Not Being Earnest (19 Nov 2011) Will I still be tweeting in 2013? (2 Jan 2012) Blogging in the service of science (10 Mar 2012) Blogging as post-publication peer review (21 Mar 2013) The impact of blogging on reputation ( 27 Dec 2013) WeSpeechies: A meeting point on Twitter (12 Apr 2014) Email overload ( 12 Apr 2016) How to survive on Twitter - a simple rule to reduce stress (13 May 2018)

Academic life
An exciting day in the life of a scientist (24 Jun 2010) How our current reward structures have distorted and damaged science (6 Aug 2010) The challenge for science: speech by Colin Blakemore (14 Oct 2010) When ethics regulations have unethical consequences (14 Dec 2010) A day working from home (23 Dec 2010) Should we ration research grant applications? (8 Jan 2011) The one hour lecture (11 Mar 2011) The expansion of research regulators (20 Mar 2011) Should we ever fight lies with lies? (19 Jun 2011) How to survive in psychological research (13 Jul 2011) So you want to be a research assistant? (25 Aug 2011) NHS research ethics procedures: a modern-day Circumlocution Office (18 Dec 2011) The REF: a monster that sucks time and money from academic institutions (20 Mar 2012) The ultimate email auto-response (12 Apr 2012) Well, this should be easy…. (21 May 2012) Journal impact factors and REF2014 (19 Jan 2013)  An alternative to REF2014 (26 Jan 2013) Postgraduate education: time for a rethink (9 Feb 2013)  Ten things that can sink a grant proposal (19 Mar 2013)Blogging as post-publication peer review (21 Mar 2013) The academic backlog (9 May 2013)  Discussion meeting vs conference: in praise of slower science (21 Jun 2013) Why we need pre-registration (6 Jul 2013) Evaluate, evaluate, evaluate (12 Sep 2013) High time to revise the PhD thesis format (9 Oct 2013) The Matthew effect and REF2014 (15 Oct 2013) The University as big business: the case of King's College London (18 June 2014) Should vice-chancellors earn more than the prime minister? (12 July 2014)  Some thoughts on use of metrics in university research assessment (12 Oct 2014) Tuition fees must be high on the agenda before the next election (22 Oct 2014) Blaming universities for our nation's woes (24 Oct 2014) Staff satisfaction is as important as student satisfaction (13 Nov 2014) Metricophobia among academics (28 Nov 2014) Why evaluating scientists by grant income is stupid (8 Dec 2014) Dividing up the pie in relation to REF2014 (18 Dec 2014)  Shaky foundations of the TEF (7 Dec 2015) A lamentable performance by Jo Johnson (12 Dec 2015) More misrepresentation in the Green Paper (17 Dec 2015) The Green Paper’s level playing field risks becoming a morass (24 Dec 2015) NSS and teaching excellence: wrong measure, wrongly analysed (4 Jan 2016) Lack of clarity of purpose in REF and TEF ( 2 Mar 2016) Who wants the TEF? ( 24 May 2016) Cost benefit analysis of the TEF ( 17 Jul 2016)  Alternative providers and alternative medicine ( 6 Aug 2016) We know what's best for you: politicians vs. experts (17 Feb 2017) Advice for early career researchers re job applications: Work 'in preparation' (5 Mar 2017) Should research funding be allocated at random? (7 Apr 2018) Power, responsibility and role models in academia (3 May 2018) My response to the EPA's 'Strengthening Transparency in Regulatory Science' (9 May 2018) More haste less speed in calls for grant proposals ( 11 Aug 2018) Has the Society for Neuroscience lost its way? ( 24 Oct 2018) The Paper-in-a-Day Approach ( 9 Feb 2019) Benchmarking in the TEF: Something doesn't add up ( 3 Mar 2019) The Do It Yourself conference ( 26 May 2019)  

Celebrity scientists/quackery
Three ways to improve cognitive test scores without intervention (14 Aug 2010) What does it take to become a Fellow of the RSM? (24 Jul 2011) An open letter to Baroness Susan Greenfield (4 Aug 2011) Susan Greenfield and autistic spectrum disorder: was she misrepresented? (12 Aug 2011) How to become a celebrity scientific expert (12 Sep 2011) The kids are all right in daycare (14 Sep 2011)  The weird world of US ethics regulation (25 Nov 2011) Pioneering treatment or quackery? How to decide (4 Dec 2011) Psychoanalytic treatment for autism: Interviews with French analysts (23 Jan 2012) Neuroscientific interventions for dyslexia: red flags (24 Feb 2012) Why most scientists don't take Susan Greenfield seriously (26 Sept 2014) NeuroPointDX's blood test for Autism Spectrum Disorder ( 12 Jan 2019)

Women
Academic mobbing in cyberspace (30 May 2010) What works for women: some useful links (12 Jan 2011) The burqua ban: what's a liberal response (21 Apr 2011) C'mon sisters! Speak out! (28 Mar 2012) Psychology: where are all the men? (5 Nov 2012) Should Rennard be reinstated? (1 June 2014) How the media spun the Tim Hunt story (24 Jun 2015)

Politics and Religion
Lies, damned lies and spin (15 Oct 2011) A letter to Nick Clegg from an ex liberal democrat (11 Mar 2012) BBC's 'extensive coverage' of the NHS bill (9 Apr 2012) Schoolgirls' health put at risk by Catholic view on vaccination (30 Jun 2012) A letter to Boris Johnson (30 Nov 2013) How the government spins a crisis (floods) (1 Jan 2014) The alt-right guide to fielding conference questions (18 Feb 2017) We know what's best for you: politicians vs. experts (17 Feb 2017) Barely a good word for Donald Trump in Houses of Parliament (23 Feb 2017) Do you really want another referendum? Be careful what you wish for (12 Jan 2018) My response to the EPA's 'Strengthening Transparency in Regulatory Science' (9 May 2018) What is driving Theresa May? ( 27 Mar 2019)

Humour and miscellaneous
Orwellian prize for scientific misrepresentation (1 Jun 2010) An exciting day in the life of a scientist (24 Jun 2010) Science journal editors: a taxonomy (28 Sep 2010) Parasites, pangolins and peer review (26 Nov 2010) A day working from home (23 Dec 2010) The one hour lecture (11 Mar 2011) The expansion of research regulators (20 Mar 2011) Scientific communication: the Comment option (25 May 2011) How to survive in psychological research (13 Jul 2011) Your Twitter Profile: The Importance of Not Being Earnest (19 Nov 2011) 2011 Orwellian Prize for Journalistic Misrepresentation (29 Jan 2012) The ultimate email auto-response (12 Apr 2012) Well, this should be easy…. (21 May 2012) The bewildering bathroom challenge (19 Jul 2012) Are Starbucks hiding their profits on the planet Vulcan? (15 Nov 2012) Forget the Tower of Hanoi (11 Apr 2013) How do you communicate with a communications company? (30 Mar 2014) Noah: A film review from 32,000 ft (28 July 2014) The rationalist spa (11 Sep 2015) Talking about tax: weasel words (19 Apr 2016) Controversial statues: remove or revise? (22 Dec 2016) The alt-right guide to fielding conference questions (18 Feb 2017) My most popular posts of 2016 (2 Jan 2017) An index of neighbourhood advantage from English postcode data (15 Sep 2018) Working memories: A brief review of Alan Baddeley's memoir (13 Oct 2018)

Sunday, 26 May 2019

The Do It Yourself (DIY) conference


This blogpost was inspired by a tweet from Natalie Jester, a PhD student at the School for Sociology, Politics and International Studies at the University of Bristol, raising the question of why academic conferences cost so much to attend.


I agreed with her, noting that the main costs were venue hire and speaker expenses, but that prices were often hiked by organisers using lavish venues and aiming to make a profit from the meeting. I linked to my earlier post about the eye-watering profits that the Society for Neuroscience makes from its meetings. In contrast, the UK's Experimental Psychology Society uses its income from membership fees and its journal to support meetings three times a year, and doesn't even charge a registration fee.

Pradeep Reddy Raamana, a Canadian Open Neuroscience scholar from Toronto, responded, drawing my attention to a thread on this very topic from a couple of weeks ago.

There were useful suggestions in the thread, including reducing costs by spending less on luxurious accommodation for organisers, and encouraging PIs to earmark funds for their junior staff to cover their conference attendance costs.

That's all fine, but my suggestion is a radically different approach: find a small group of two or three like-minded people and organise your own conference. I'm sure people will respond that they have to go to the big society meetings in their field in order to network and promote their research. Nothing in my suggestion precludes you also doing this (though see the climate emergency point below). But I suspect that if you go down the DIY route, you may get a lot more out of the experience than you would at a big, swish society conference, both in terms of personal benefit and career prospects.

I'm sure people will want to add to these ideas, but here's my experience, which is based on running various smallish meetings, including being local organiser for occasional EPS meetings over the years. I was also, with Katharine Perera, Gina Conti-Ramsden and Elena Lieven, a co-organiser of the Child Language Seminar (CLS) in Manchester back in the 1980s. That is perhaps the best example of a DIY conference, because we had no infrastructure and just worked it out as we went along. The CLS was a very ad hoc affair: each year, the organisers tried to find someone prepared to run the next CLS at their own institution the following year. Despite this informality, the CLS – now with the more appropriate name of Child Language Symposium – is still going strong in 2019. From memory, we had around 120 people from all over the world at the Manchester meeting. Numbers have grown over the years, but if you are doing a DIY meeting for the first time, I'd aim to keep it small: no more than 200 people.

The main costs you will incur in organising a meeting are:
  • Venue
  • Refreshments
  • Reception/Conference dinner
  • Expenses for speakers
  • Administrative costs
  • Publicity
Your income to cover these costs will come from:
  • Grants (optional)
  • Registration fees

So the main thing to do at the start is to sit down and do some sums to ensure you will break even. Here are my experiences with each of these categories:

Venue

You do not need to hold the meeting at a swanky hotel. Your university is likely to have conference facilities: check out their rates. Consider what you need in terms of lecture theatre capacity, break-out rooms, and rooms for posters and refreshments. You also need to factor in the cost of technical support. My advice is to let people look after their own accommodation: at most, give them a list of places to stay. This massively cuts down on your workload.

Refreshments

The venue should be able to offer teas/coffees. You will probably be astounded at what institutions charge for a cup of instant coffee and a boring biscuit, but I recommend you go with the flow on that one. People do need their coffee breaks, however humble the refreshments.

Reception/Conference dinner

A welcome reception is a good way of breaking the ice on the first evening. It need not be expensive: a few bottles of wine plus water, soft drinks and some nibbles will do. You could just find a space and provide the refreshments yourselves: most of the EPS meetings I've been to simply have some bottles set out for people to help themselves. This will be cheaper than the rates charged by professional conference organisers.

You don't have to have a conference dinner. They can be rather stuffy affairs, and a torment for shy people who don't know anyone. On the other hand, when they work well, they provide an opportunity to get to know people and chat about work informally. My experience at EPS and CLS is that the easiest way to organise this is to book a local restaurant. They will probably suggest a set meal at a set price, with people selecting options in advance. This will involve some admin work – see below.

Expenses for speakers

For a meeting like the CLS there is a small number of invited plenary speakers. This is your opportunity to invite the people you really want to hear from. It's usual to offer economy class travel and accommodation in a good hotel. This need not be lavish, but it should have quiet rooms with an ensuite bathroom, a large comfortable bed, a desk area, sufficient power sockets, adequate aircon/heating, and free wifi. Someone who has flown around the world to come to your meeting is not going to remember you fondly if they are put up in a cramped bed and breakfast. I've had some grim experiences over the years and now check TripAdvisor to make sure I've not been booked in somewhere awful. I still remember a meeting where an eminent speaker who had flown in from North America found herself put in student accommodation: she turned around, booked herself into a hotel, and left with dismal memories of the organisers.

Pradeep noted that conferences could save costs if speakers covered their own expenses. It's true that many speakers have funds they could use for this purpose, but don't assume so: even if they do, consider why they should spend that money on coming to your meeting rather than on something else. A diplomatic approach is to say in the letter of invitation that you can cover economy class travel, accommodation, dinner and registration, while noting that if speakers have funds that could cover their travel, the savings will make it possible to offer some sponsored places to students.

Administration

It's easy to overlook this item, but fortunately it is now relatively simple to handle registrations with online tools such as Eventbrite. They take a cut if you charge for registration, but in my experience that's well worth it for the grief it saves with spreadsheets. If you are having a conference dinner, bookings for it can be bundled in with the registration fee.

In the days of the Manchester CLS, email barely existed and nobody expected a conference website, but nowadays one is mandatory, so you will need someone willing to set it up and populate it with information about the venue and programme. As with my other advice, there's no need to make it fancy; just ensure it has the basic information people need, with a link to the registration page.

There are further items, like setting up the Eventbrite page, making conference badges, and ensuring smooth communication with the venue, speakers and restaurant. Here the main thing is to delegate responsibility so everyone knows what they have to do. I've quite often agreed to speak at a meeting only to find that, with a week to go, nobody has contacted me about the programme or venue.

On the day, you'll be glad of assistants who can do things like shepherding people into sessions, taking messages, etc. You can offer free registration to local students in return for them acting in this role.

Publicity

I've listed this under costs, but I've never spent anything on publicity for meetings I've organised, and given social media, I don't think you'll need to either.

Grants

I've marked grants as optional, because you can cover costs without one. But every bit of money helps, and it's possible that one of the organisers will have funding that can be used. My advice is also to check out options for grant funding from a society or other funder. National funding bodies such as the UK research councils or NIH may have pots of money you can apply for: the sums are typically small and applying is not onerous. Even if a society doesn't have a grants stream for meetings, it may be willing to sponsor places for specific categories of attendee, such as early-career people or those from resource-poor countries.

Local businesses or publishers are often willing to sponsor things like conference bags, in return for showing their logo. You can often charge publishers for a stand.

Registration

Once you have thought through the items listed under costs, and have an idea of whether you'll have grant income, you will be in a good position to work out what you need to charge those attending. The ideal is to break even, and it's important not to overspend, so estimate how many people are likely to register in each category and set a registration fee that will cover your costs even if numbers are disappointing.
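
To make the sums concrete, here is a minimal sketch in Python of the kind of break-even calculation I have in mind. Every figure is an invented placeholder, and the half-price student fee and 30% pessimism margin are just one possible policy; substitute your own quotes and estimates.

    # A rough break-even sum for a small DIY meeting.
    # All figures below are invented placeholders, not real quotes.
    costs = {
        "venue": 2000,             # lecture theatre and break-out rooms
        "refreshments": 1200,      # teas/coffees at institutional rates
        "reception": 400,          # wine, soft drinks and nibbles
        "speaker_expenses": 3000,  # economy travel and hotel for plenary speakers
        "admin": 500,              # badges, printing, contingency
    }
    grant_income = 1000            # set to 0 if you have no grant

    # Estimate attendance pessimistically so a shortfall doesn't sink you.
    expected = {"standard": 80, "student": 40}
    pessimistic = {group: round(n * 0.7) for group, n in expected.items()}

    shortfall = sum(costs.values()) - grant_income

    # Suppose students pay half the standard fee; then we need
    #   fee * (n_standard + 0.5 * n_student) >= shortfall
    fee = shortfall / (pessimistic["standard"] + 0.5 * pessimistic["student"])

    print(f"Standard fee: £{fee:.2f}; student fee: £{fee / 2:.2f}")

If the fee that comes out looks too high, that is your cue to trim costs or seek sponsorship before you announce the meeting, rather than after.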

What can go wrong?

  • Acts of God. I still remember a meeting at the Royal Society years ago where a hurricane swept across Britain overnight and around half of those attending couldn't make it. Strikes, riots and the like can also happen, but I recommend you just accept that these are risks beyond your control.
  • Clash of dates. This is under your control to some extent. Before you settle on a date, ask around to check there isn't a clash with other meetings or with religious holidays.
  • Speaker pulls out. I have organised meetings where a speaker pulled out at the last minute – there is usually a good reason for this, such as illness. So long as it is only one person, this can be managed, and may even provide an opportunity to do something useful with the time, such as holding a mini-hackathon to brainstorm ideas about a specific problem.
  • You make a loss. This is a scary prospect, but it should not happen with adequate planning, as noted above. The main thing is to confirm what your speaker expenses will be so you don't get any nasty surprises at the last minute.
  • Difficult people. This is a minor one, but I remember the wise words of Betty Byers Brown, a collaborator from those old Manchester days, who told me that 95% of the work of a conference organiser is caused by 5% of those attending. Just knowing that makes it easier to deal with.
  • Unhappy people. People who come from far away and know nobody can have a miserable time at a conference, but with planning you can help them integrate into the group. Rather than formal entertainment, consider social activities that ensure everyone is included. Also, have an explicit anti-harassment policy – there are plenty of examples on the web.
  • Criticism. Whatever you do, there will be people who complain: why didn't you do X rather than Y? This can be demoralising when you have put a lot of work into organising something. Nevertheless, make sure you ask people for feedback after the meeting: if there are things that could be done better next time, you need to know about them. For what it's worth, the most common complaints I hear after meetings are that speakers go on too long and there is not enough time for questions and discussion. It's important to have firm chairing, and to set up the schedule to encourage interaction.

What can go right?

  • Running a conference carries an element of risk and stress, but it's an opportunity to develop organisational skills, and this can be a great thing to put on your CV. The skills you need to plan a conference are not so different from those needed to budget for a grant: you have to work out how to optimise the use of funds, anticipating expenses and risks.
  • Bonding with co-organisers. If you pick your co-organisers wisely, you may find that the experience of working together to solve problems is enjoyable and you learn a lot.
  • You can choose the topics for your meeting and invite the speakers you most want to hear. As a young researcher organising a small meeting, I got to know the people I'd invited as speakers in a way that would not have been possible had I just been attending a big meeting organised by a major society.
  • You can do it your way. You can decide whether to lower costs for specific groups. You can make sure that the speakers are diverse, and can experiment with different approaches to get away from the traditional format of speakers delivering a lecture to an audience. For examples, see this post and the comments below it.
  • The main thing is that if you are in control, you can design your meeting to achieve what scientific meetings are supposed to achieve: scholarly communication and interaction that sparks ideas and collaborations. The meetings I organised as an early-career academic remain high points of my career, which is why I am so keen to encourage others to do the same.

But! ... Climate emergency

The elephant in this particular room is air travel. Academics are used to zipping around the world to conferences, at a time when we increasingly recognise the harm this does to our planet. My only justification for writing this post now is that it may encourage people to go to smaller, more focused meetings. But I'm trying to cut down substantially on air travel, and in the longer term I suspect we will need to move to virtual meetings.

Groups of younger researchers, and those from outside Europe and the UK, have a role to play in working out how to do this. I hope to encourage them by urging people to be bold and venture outside the big conference arenas, where junior people and those from marginalised groups can feel invisible. Organising a small meeting teaches you skills that can then be used to devise more radical formats. Conferences are going to change, and you should be shaping that future.
