Tuesday, 26 December 2017

Using simulations to understand p-values

Intuitive explanations of statistical concepts for novices #4

The p-value is widely used but widely misunderstood. I'll demonstrate this in the context of intervention studies. The key question is how confident we can be that an apparently beneficial effect of treatment reflects a change due to the intervention, rather than arising just through the play of chance. The p-value gives one way of deciding that. There are other approaches, including those based on Bayesian statistics, which are preferred by many statisticians. But I will focus here on the traditional null hypothesis significance testing (NHST) approach, which dominates statistical reporting in many areas of science, and which uses p-values.

As illustrated in my previous blogpost, where our measures include random noise, the distorting effects of chance mean that we can never be certain whether or not a particular pattern of data reflects a real difference between groups. However, we can compute the probability of obtaining data like ours if there were really no effect of the intervention.

There are two ways to do this. One way is by simulation. If you repeatedly run the kind of simulation described in my previous blogpost, specifying no mean difference between groups and taking a new sample each time, you can compute a standardized effect size for each run. Cohen's d is the mean difference between groups expressed in standard deviation units: it is computed by subtracting the group A mean from the group B mean, and dividing by the pooled standard deviation (i.e. the square root of the average of the variances for the two groups). You then see how often the simulated data give an effect size at least as large as the one observed in your experiment.
Figure 1: Histograms of effect sizes obtained by repeatedly sampling from a population where there is no difference between groups*
Figure 1 shows the distribution of effect sizes for two different studies: the first has 10 participants per group, and the second has 80 per group. For each study, 10,000 simulations were run; on each run, a fresh sample was taken from the population, and the standardized effect size, d, computed for that run. The peak of each distribution is at zero: we expect this, as we are simulating the case of no real difference between groups – the null hypothesis. But note that, though the shape of the distribution is the same for both studies, the scale on the x-axis covers a broader range for the study with 10 per group than the study with 80 per group. This relates to the phenomenon shown in Figure 5 of the previous blogpost, whereby estimates of group means jump around much more when there is a small sample.
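To make this concrete, here is a minimal R sketch of the procedure just described (not the script used to generate the figures, which is linked at the foot of the post): it repeatedly draws two groups from the same population and records Cohen's d for each run.

# Simulate the null distribution of Cohen's d for a given group size
simulate_null_d <- function(n_per_group, n_sims = 10000) {
  replicate(n_sims, {
    groupA <- rnorm(n_per_group, mean = 0, sd = 1)   # both groups drawn from the
    groupB <- rnorm(n_per_group, mean = 0, sd = 1)   # same population: no true effect
    pooled_sd <- sqrt((var(groupA) + var(groupB)) / 2)
    (mean(groupB) - mean(groupA)) / pooled_sd        # Cohen's d for this run
  })
}

set.seed(1)
d_null_10 <- simulate_null_d(10)   # study with 10 per group
d_null_80 <- simulate_null_d(80)   # study with 80 per group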

The dotted red lines show the cutoff points that identify the top 5%, 1% and 0.1% of the effect sizes. Suppose we ran a study with 10 people per group and it gave a standardized effect size of 0.3. We can see from the figure that a value in this range is fairly common when there is no real effect: around 25% of the simulations gave an effect size of at least 0.3. However, if our study had 80 people per group, then the simulation tells us this is an improbable result to get if there really is no effect of intervention: only 2.7% of simulations yield an effect size as big as this.
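Using the simulated values from the sketch above, both the cutoffs and the tail proportions quoted here can be read off directly (the exact numbers will vary slightly from run to run):

# Cutoffs marking the top 5%, 1% and 0.1% of null effect sizes
quantile(d_null_10, probs = c(.95, .99, .999))
quantile(d_null_80, probs = c(.95, .99, .999))

# How often does the null simulation give d of at least 0.3?
mean(d_null_10 >= 0.3)   # roughly 0.25 with 10 per group
mean(d_null_80 >= 0.3)   # roughly 0.03 with 80 per group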

The p-value is the probability of obtaining a result at least as extreme as the one that is observed, if there really is no difference between groups. So for the study with N = 80, p = .027. Conventionally, a level of p < .05 has been regarded as 'statistically significant', but this is entirely arbitrary. There is an inevitable trade-off between false positives (type I errors) and false negatives (type II errors). If it is very important to avoid false positives, and you do not mind sometimes missing a true effect, then a stringent p-value is desirable. If, however, you do not want to miss any finding of potential interest, even if it turns out to be a false positive, then you could adopt a more lenient criterion.

The comparison between the two sample sizes in Figure 1 should make it clear that statistical significance is not the same thing as practical significance. Statistical significance simply tells us how improbable a given result would be if there was no true effect. The larger the sample size, the smaller the effect size that can be detected at a threshold such as p < .05. Small samples are generally a bad thing, because they only allow us to reliably detect very large effects. But very large samples have the opposite problem: they allow us to detect as 'significant' effects that are so small as to be trivial. The key point is that the researcher who is conducting an intervention study should start by considering how big an effect would be of practical interest, given the cost of implementing the intervention. For instance, you may decide that staff training and time spent on a vocabulary intervention would only be justified if it boosted children's vocabulary by at least 10 words. If you knew how variable children's scores were on the outcome measure, the sample size could then be determined so that the study has a good chance of detecting that effect while minimising false positives. I will say more about how to do that in a future post.
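As a small preview, this kind of calculation can be done with R's built-in power.t.test() function; the standard deviation of 15 words used here is just an assumed value for illustration.

# How many children per group would be needed to detect a 10-word boost?
# The SD of 15 words is an assumed value, purely for illustration.
power.t.test(delta = 10, sd = 15, sig.level = 0.05, power = 0.8,
             type = "two.sample", alternative = "two.sided")   # solves for n per group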

I've demonstrated p-values using simulations in the hope that this will give some insight into how they are derived and what they mean. In practice, we would not normally derive p-values this way, as there are much simpler ways to do so, using statistical formulae. Provided that data are fairly normally distributed, we can use statistical approaches such as ANOVA, t-tests and linear regression to compute probabilities of observed results (see this blogpost). Simulations can, however, be useful in two situations. First, if you don't really understand how a statistic works, you can try running an analysis with simulated data. You can either simulate the null hypothesis by creating data from two groups that do not differ, or you can add a real effect of a given size to one group. Because you know exactly what effect size was used to create the simulated data, you can get a sense of whether particular statistics are sensitive enough to detect real effects, and how this might vary with sample size.
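For example, one might simulate studies in which a true effect of known size has been added to one group, and count how often a t-test flags it as significant (a rough sketch; the effect size and sample size here are chosen arbitrarily):

# Simulate studies where the true effect is d = 0.5 and count how often
# a t-test gives p < .05 (i.e. estimate the power by simulation)
set.seed(2)
n_per_group <- 20
true_d <- 0.5
pvals <- replicate(10000, {
  groupA <- rnorm(n_per_group, mean = 0, sd = 1)
  groupB <- rnorm(n_per_group, mean = true_d, sd = 1)  # real effect added to group B
  t.test(groupB, groupA)$p.value
})
mean(pvals < .05)   # proportion of simulated studies detecting the effect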

The second use of simulations is for situations where the assumptions of statistical tests are not met – for instance, if data are not normally distributed, or if you are using a complex design that incorporates multiple interacting variables. If you can simulate a population of data that has the properties of your real data, you can then repeatedly sample from this and compute how often you obtain a result at least as extreme as the one you observed, to get a direct estimate of the p-value, just as was done above.
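Here is a minimal sketch of that idea, using a skewed (exponential) outcome purely as an example of non-normal data; the observed difference of 0.4 is a made-up value standing in for the result of a real study.

# Empirical p-value when the outcome is skewed rather than normal.
# The null is simulated by drawing both groups from the same skewed
# distribution, chosen arbitrarily for illustration.
set.seed(3)
n_per_group <- 30
observed_diff <- 0.4   # stand-in for the mean difference found in the real study

null_diffs <- replicate(10000, {
  groupA <- rexp(n_per_group, rate = 1)
  groupB <- rexp(n_per_group, rate = 1)
  mean(groupB) - mean(groupA)
})
mean(abs(null_diffs) >= abs(observed_diff))   # two-sided empirical p-value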

The key point to grasp about a p-value is that it tells you how likely your observed evidence is, if the null hypothesis is true. The most widely used criterion is .05: if the p-value in your study is less than .05, then the chance of your observed data arising when the intervention had no effect is less than 1 in 20. You may decide on that basis that it's worth implementing the intervention, or at least investing in the costs of doing further research on it.

The most common mistake is to think that the p-value tells you how likely the null hypothesis is given the evidence. But that is something else. The probability of A (observed data) given B (null hypothesis) is not the same as the probability of B (null hypothesis) given A (observed data). As I have argued in another blogpost, the probability that you are a criminal, given that you are a man, is not high; but the probability that you are a man, given that you are a criminal, is much higher. This may seem fiendishly complicated, but a concrete example can help.

Suppose Bridget Jones has discovered three weight loss pills: if taken for a month, pill A is a totally ineffective placebo, pill B leads to a modest weight loss of 2 lb, and pill C leads to an average loss of 7 lb. We do studies with three groups of 20 people; in each group, half are given pill A, B or C and the remainder are untreated controls. We discover that after a month, one of the treated groups has an average weight loss of 3 lb, whereas their control group has lost no weight at all. We don't know which pill this group received. If we run a statistical test, we find the p-value is .45. This means we cannot reject the null hypothesis of no effect – which is what we'd expect if this group had been given the placebo pill, A. But the result is also compatible with the participants having received pill B or C. This is demonstrated in Figure 2, which shows the probability density function for each scenario - in effect, the outline of the histogram. The red dotted line corresponds to our obtained result, and it is clear that such a result is quite probable regardless of which pill was used. In short, this result doesn't tell us how likely the null hypothesis is – only that the null hypothesis is compatible with the evidence that we have.
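To give a flavour of how a plot like Figure 2 could be produced, here is a rough sketch that draws the sampling distribution of the treated-minus-control difference under each pill. The standard deviation of individual weight loss (9 lb) and the 10-per-arm design are assumed values, chosen so that a 3 lb difference gives a p-value in the region of .45; the author's own script (linked below) may differ.

# Sampling distributions of the mean difference (treated minus control)
# under each pill: true effects of 0, 2 and 7 lb.
sd_loss <- 9                              # assumed SD of individual weight loss
n_per_arm <- 10                           # 20 per study, half treated, half control
se_diff <- sd_loss * sqrt(2 / n_per_arm)  # SE of the difference in group means

x <- seq(-15, 20, length.out = 500)
plot(x, dnorm(x, mean = 0, sd = se_diff), type = "l", col = "blue",
     xlab = "Mean weight loss, treated minus control (lb)", ylab = "Density")
lines(x, dnorm(x, mean = 2, sd = se_diff), col = "darkgreen")   # pill B
lines(x, dnorm(x, mean = 7, sd = se_diff), col = "purple")      # pill C
abline(v = 3, col = "red", lty = 2)                             # observed 3 lb difference
legend("topright", legend = c("A (0 lb)", "B (2 lb)", "C (7 lb)"),
       col = c("blue", "darkgreen", "purple"), lty = 1)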
Figure 2: Probability density functions for weight loss pills A, B and C, with red line showing observed result

Many statisticians and researchers have argued we should stop using p-values, or at least adopt more stringent levels of p. My view is that p-values can play a useful role in contexts such as the one I have simulated here, where you want to decide whether an intervention is worth adopting, provided you understand what they tell you. It is crucial to appreciate how dependent a p-value is on sample size, and to recognise that the information it provides is limited to telling you whether an observed difference could just be due to chance. In a later post I'll go on to discuss the most serious negative consequence of misunderstanding p-values: the generation of false positive findings through p-hacking.

*The R script to generate Figures 1 and 2 can be found here.

8 comments:

  1. "The most common mistake is to think that the p-value tells you how likely the null hypothesis is given the evidence."

This is like a stab in the heart, realizing that I'm making this mistake all the time...

So to tell how likely H0 is given the evidence, we'd need a full-blown Bayesian analysis with a prior and everything? Or is there another way?

    1. I can recommend the free Coursera course by Daniel Lakens 'Improving Your Statistical Inferences' for a very clear explanation.
      In addition, David Colquhoun has much to say on this matter and has been attempting to post a comment here, which Blogger rejected because it had several links. I will add details of that as well.

    2. Thank you. Taking that course has been on my wish list for a while...

    3. You certainly can't say anything much about the probability of H0 without using Bayes' theorem. One problem is that there is no single "full-blown Bayesian analysis". There's an infinitude of them. Nevertheless it seems to me to be folly to ignore the prior probability, despite the fact that you don't know it. My version is perhaps the simplest Bayesian approach, but it gives results that are quite close to at least two fancier approaches with a lot less maths.

  2. Here is David Colquhoun's paper on the topic: http://rsos.royalsocietypublishing.org/content/4/12/171085.article-info
    He also had additional comments, which I hope he will be able to add if the weblinks are omitted.

  3. Thanks Dorothy. I didn't realise that there was a limit on links.

I think that it is potentially misleading to refer to Type 1 errors as false positives. I say that because what really matters is the probability that your result is a false positive, your false positive risk. In order to get this you need to know the total number of positives, false and true, as illustrated in Fig 2 of my 2014 paper. I say that the false positive risk is what people really want because, sadly, most users still seem to think it is what the p-value tells you.

    I suggest that better ways to put the result are as follows. Using your numbers, p = 0.027, n = 80 and d = 0.3, you can calculate that you'd have to be 77% sure that there was a real effect before you did the experiment in order to achieve a false positive risk of 0.05. An alternative way of putting this is to note that your minimum false positive risk would be 15% (that's for prior odds of 1). These numbers certainly show the weakness of the evidence provided by p = 0.027.

    The assumptions behind these calculations are given in my 2017 paper (thanks for linking to that).

Despite your best efforts, it's disappointing that few people seem to download R scripts. So we made a web calculator which makes it easy to get the numbers that I cite above (you can even do it on your phone).
    The web calculator is at: http://fpr-calc.ucl.ac.uk/

  4. I am unhappy with the statement that my use of terminology is 'potentially misleading'. I define 'type I error' and 'false positive' in a standard fashion - anyone who is confused by David's statement can see, for instance, this explainer by stats guru Doug Altman http://www.equator-network.org/wp-content/uploads/2014/11/Sample_size_calculation_Doug_Altman.pdf

    Although he appears to be criticising inaccurate use of terminology, it seems David is actually objecting to the topic of the blogpost (i.e. how the p-value is determined from the null hypothesis) - because he deems this approach misleading. Anyone who reads my post carefully should be able to see that I explained that the conventional use of p-values was misunderstood by many and I explicitly noted that it does not tell you the probability that your result is a false positive - which is the focus of David's concern.

    These are indeed separate matters and the latter is one I aim to cover in a later post.

Please don't be unhappy. The bit that triggered my comment was "false positives (type I errors)". That sounds superficially like an example of transposing the conditional. I guess the problem is that "false positive" is used in two different senses. I would maintain that its use as a synonym for type 1 error rate is not the relevant one for the interpretation of tests of significance.

    One thing that I have learned is that when discussing these matters one has to be obsessional about the use of words :-)

I realise, too, that this topic wasn't the main point of this post. I'm looking forward to the next one.
