Wednesday 5 October 2011

The joys of inventing data


Have I gone over to the dark side? Cracked under pressure from the REF to resort to fabrication of results to secure that elusive Nature paper? Or had my brain addled by so many requests for information from ethics committees that I’ve just decided it’s easier to be unethical? Well, readers will be reassured to hear that none of these things is true. What I have to say concerns the benefits of made-up data for helping us understand how to analyse real data.
In my field of experimental psychology, students get a thorough grounding in statistics and learn how to apply various methods for testing whether groups differ from one another, whether variables are associated, and so on. But what they typically don’t get is any instruction in how to simulate datasets. This may be a historical hangover. When I first started out in the field, people didn’t have their own computers, and if you wanted to do an analysis you either laboriously assembled a set of instructions in Fortran which were punched onto cards and run on a mainframe computer (overnight if you were lucky), or you did the sums on a pocket calculator. Data simulation was just unfeasible for most people. Over the years, the landscape has changed beyond recognition and there are now Windows-based applications that allow one to do complex multivariate statistics at the press of a button. There is a danger, however, which is that people do analyses without understanding them. And one of the biggest problems of all is a tendency to apply statistical analyses post hoc. You can tell people over and over that this is a Bad Thing (see Good and Hardin, 2003) but they just don’t get it. A little simulation exercise can be worth a thousand words.
So here’s an illustration. Suppose we’ve got two groups each of 10 people, let’s say left-handers and right-handers. And we’ve given them a battery of 20 cognitive tests. When we scrutinise the results, we find that they don’t differ on most of the measures, but there’s a test of mathematical skill on which the left-handers outperform the right-handers. We do a t-test and are delighted to find that on this measure, the difference between groups is significant at the .05 level, so we write up a paper entitled "Left-handed advantage for mathematical skills" and submit it to a learned journal, not mentioning the other 19 tests. After all, they weren’t very interesting. Sounds OK? Well, it isn’t. We have fallen into the trap of using statistical methods that are valid for testing a hypothesis that is specified a priori in a situation where the hypothesis only emerged after scrutinising the data.
Let’s generate some data. Most people have access to Microsoft Excel, which is perfect for simple simulations. In row 1 we put our column labels: group, var1, var2, … var20.
In column A (rows 2 to 21), we then have ten zeroes followed by ten ones, indicating group identity. We then use random numbers to complete the table. The simplest way to do this is just to type in each cell:
   =RAND()
This generates a random number between 0 and 1.
A more sophisticated option is to generate a random z-score. This creates random numbers that meet the assumption of many statistical tests that data are normally distributed. You do this by typing:
   =NORMSINV(RAND())
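If you want scores on a more familiar scale, a simple variant of the same idea (the mean of 100 and SD of 15 here are just for illustration) is to type:
   =NORMINV(RAND(),100,15)
This gives normally distributed random scores with whatever mean and standard deviation you choose.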
At the foot of each column you can compute the mean and standard deviation for each group, and Excel will compute a p-value from a t-test comparing the two groups with a formula such as:
   =TTEST(B2:B11,B12:B21,2,2)
The last two arguments specify a two-tailed test and a two-sample, equal-variance t-test; see this site if you need a fuller explanation of the formula.
So how do the formulae in the first few columns fit together?
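A rough sketch of one way to lay the sheet out is shown below; the data rows (2 to 21) and the p-value row (27) match the description in the text, while the rows used here for the means and SDs are just an illustrative choice:
   A2:A11    0                        (group codes for group 0)
   A12:A21   1                        (group codes for group 1)
   B2:B21    =NORMSINV(RAND())        (random scores; likewise in columns C to U)
   B23       =AVERAGE(B2:B11)         (mean, group 0)
   B24       =AVERAGE(B12:B21)        (mean, group 1)
   B25       =STDEV(B2:B11)           (SD, group 0)
   B26       =STDEV(B12:B21)          (SD, group 1)
   B27       =TTEST(B2:B11,B12:B21,2,2)   (p-value for the group comparison)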
Copy these formulae across all the variable columns. I added conditional formatting to row 27 so that ‘significant’ p-values are highlighted in yellow (and it just so happened with my example run that the generated data gave a p-value less than .05 for column C).
Every time you type anything at all on the sheet, all the random numbers are updated: I’ve just added a row called ‘thisrun’, and typing any number in cell B29 will re-run the simulation. This provides a simple way of generating a series of simulations and seeing when p-values fall below .05. On some runs, all the t-tests are nonsignificant, but you’ll quickly see that on many runs one or more p-values are below .05. In fact, across numerous runs, the average number of ‘significant’ values will be one, because each of the twenty columns has a 1 in 20 (i.e. .05) chance of falling below .05 purely by chance, and 20 × .05 = 1. That’s what p < .05 means! If this doesn’t convince you of the importance of specifying your hypothesis in advance, rather than selecting data for analysis post hoc, nothing will.
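If you want the sheet to keep score for you, one small addition (not part of the layout above, and assuming the p-values sit in row 27, columns B to U) is a cell that counts how many of the twenty p-values fall below .05 on the current run:
   =COUNTIF(B27:U27,"<0.05")
Re-run the simulation a number of times and you will see this count averaging out at about one.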
This is a very simple example, but you can extend the approach to much more complicated analytic methods. It gets challenging in Excel if you want to generate correlated variables, but it can be done: if you type a correlation coefficient in cell A1, put random z-scores (as above, using NORMSINV) in column B from row 2 downwards, and copy this formula down from cell C2, then columns B and C will be correlated at roughly the value in cell A1:
=B2*A$1+NORMSINV(RAND())*SQRT(1-A$1^2)
NB, you won’t get the exact correlation on each run: the precision will increase with the number of rows you simulate.
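As a quick check, you can get Excel to report the correlation a given run actually produced, adjusting the ranges to however many rows you have simulated:
   =CORREL(B2:B21,C2:C21)
With only 20 rows this will bounce around a fair bit from run to run; with a few hundred rows it settles close to the value typed in A1.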
Other applications, such as Matlab or R, allow you to generate correlated data more easily. There are examples of simulating multivariate normal datasets in R in my blog on twin methods.
Simulation can be used to explore a whole host of issues around statistical methods. For instance, you can simulate data to see how sample size affects results (see the sketch below), or how results change if you fail to meet the assumptions of a method. But overall, my message is that data simulation is a simple and informative way of gaining an understanding of statistical analysis. It should be used much more widely in the training of students.
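As one concrete illustration of the sample-size point: keeping the layout sketched earlier, you could fill =NORMSINV(RAND()) down to row 201 to give 100 cases per group (these particular rows are just an assumption) and widen the t-test to match:
   =TTEST(B2:B101,B102:B201,2,2)
With purely random data the proportion of ‘significant’ columns stays at around 5% however large the samples; to watch power grow with sample size you need to build a genuine difference into the data, for example by adding a constant to the group 1 cells.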

Reference
Good, P. I., & Hardin, J. W. (2003). Common errors in statistics (and how to avoid them). Hoboken, NJ: Wiley.

14 comments:

  1. Wonderful post Dorothy, thanks - never thought of using Excel like that! Will definitely use some of this in my statistics teaching.

  2. This is sheer brilliance. I hope you will forgive me using this in teaching (properly attributed of course). Thanks!

  3. Excel is pretty good, and the advantage is that everyone knows how to use it.

    R or MATLAB are clearly more powerful but I think, when teaching students, what you want is for them to actually generate the data themselves - so they can see it really is random, and how easy it is for random data to produce significant results.

  4. Very nice post, but you don't provide the solution to this problem.

    I guess that the solution is either to correct for multiple comparisons (looking for p-values smaller than 0.05/20) or to look for an interaction between test (a factor with 20 levels) and group (left-handed and right-handed people). So what's the best solution?

    Also, I really enjoyed the paper from Nieuwenhuis and colleagues in Nature Neuroscience about how neuroscientists failed to look for interactions in their data. (Nieuwenhuis, S. et al. 2011: doi:10.1038/nn.2886)

  5. TTEST(B2:B11,B12:B22,2,2)

  6. Thanks to Anonymous for pointing out error in formula: now corrected. Wiping egg from face.

    JJ: Most stats books advise on how to deal with multiple comparisons, but the best solution depends on your research design and exactly what hypothesis you are testing. A common approach is simply to adjust the p-value required for significance; with 20 tests, a Bonferroni correction would require p < .05/20 = .0025 for any individual test. This site is a good starting point:
    http://nitro.biosci.arizona.edu/workshops/aarhus2006/pdfs/Multiple.pdf
    For examples from biotechnology, see:
    http://www.nature.com/nbt/journal/v27/n12/full/nbt1209-1135.html
    Wikipedia gives quite a detailed treatment, and links to resampling/bootstrapping methods which are computationally intensive but can be used to derive exact p-values:
    http://en.wikipedia.org/wiki/Multiple_comparisons

    If a corrected p-value makes your cherished result non-significant, but you are reluctant to accept it is just a chance finding, you could treat your study as hypothesis-generating, and collect new data to test the association between handedness and maths as an a priori hypothesis. That would be the only way to give confidence in your result.

  7. Thanks for the nice post. I've been a regular reader of your blog for a while now and have really enjoyed your writings.

    I've read about this problem many times and been told about it in stats classes, but I don't think I really get it (even though I have graduated with a degree in psychology), and I would really like to grasp the idea. Since people are saying that your example is a great illustration of the problem, I was hoping you might have time to elaborate on it? I can understand everything you say until:

    "In fact, on average, across numerous runs, the average number of significant values is going to be one. That’s what p < .05 means! If this doesn’t convince you of the importance of specifying your hypothesis in advance, rather than selecting data for analysis post hoc, nothing will."

    I think I'm missing a crucial part of the puzzle to understand what just happened there. Since I don't know what happened, it's hard to explain exactly what I don't get. If you can provide a reference, or help me understand the idea with a further explanation, I promise to get at least 3 other people to understand it as well and make them teach it to another 3 people, etc. etc.

  8. Hi Anonymous. Thanks for the comment. I realise what I said was unclear, and have modified it now. The key point is that in this example there are 20 variables. For each variable, random fluctuations lead to some difference in the means of the two groups. The p-value tells you how likely it is that a mean difference of a given size would arise in two samples when there is really no difference between the groups. Suppose that for one variable we have a largish difference, D, between the two groups, with p = .05. This tells us that if we drew one hundred pairs of random samples, about five of them would show a difference at least as big as D, just because of chance fluctuations. And if we run twenty such comparisons on random data, we would expect about one of them (20 × .05 = 1) to show a difference as big as D.
    It's worth playing around with the simulation - try adding more columns and look at how the p-values change as the mean differences change - and look at how many p-values are 'significant'. I'll continue to think about this and aim to come up with something better, but hope meanwhile that this is useful.
    In addition, you might find that this video helps: http://www.youtube.com/watch?v=ZFXy_UdlQJg

  9. Thanks for the clarification. Let's see if I got it right (you can use me as a case study for stats learning difficulties among university graduates): if we use the 0.05 value there is always a chance (1/20) that one of those 20 variables will show significance in the simulation and hence we should say beforehand what we are looking for?

  10. Anonymous: Yup. That's it. Does it make sense to you?

  11. It does, mostly. It makes sense that there is always that 1/20 chance that a 'significant' result is just down to chance after all.

    The following shows how a non-statistical mindset works (at least in my case): you pick twenty variables (cognitive measures) whose relation to handedness you are interested in, and only one of them seems to show a difference between left- and right-handed people. Since you set out to find differences in cognitive performance between left- and right-handed individuals, you probably expected the measures (maybe not all, but most of them) to differ between the two groups. And you did not find an overall difference, but you did find a difference in just one cognitive measure.

    So, if one of the measures shows a difference, then isn't this already in line with what you were trying to find - i.e. a difference in cognitive performance between left- and right-handed people? Would it not be wrong to say that there is no difference, if a difference is what you initially wanted to show and that is what you found? Or should you just say that there is no overall cognitive difference between left- and right-handed individuals, and then test them again with the same tests, with the hypothesis that we expect a difference in mathematical performance depending on participants' handedness?

    You wrote "...and submit it to a learned journal, not mentioning the other 19 tests. After all, they weren’t very interesting. Sounds OK? Well, it isn’t...". Is the problem in this case the fact that the person would not report the other 19 cases where there was no difference between the groups? Had s/he done so, would it be a statistically relevant finding? I am not trying to split hairs or to be irritatingly idiotic, but these are just some of the questions that pop into my mind.

    I might be showing how elementary my understanding of probabilities is, but maybe you can gain some insights from this into how to teach stats to beginners.

  12. Hi
    It's so cool to see you referring people to our video "Understanding the p-value". It has been very well received. We have now remade it (with our own artwork) and updated it. It forms part of a new iPad app called AtMyPace: Statistics, with accompanying quizzes to help consolidate learning.

    http://www.youtube.com/watch?v=eyknGvncKLw
    You can see a video about the app at:
    http://www.youtube.com/watch?v=S5XJaPlujjs
    Hope you like them.

  13. Hi Dorothy, just seen this nice blog.

    We run a practical (using SPSS, but that's not important) using synthetic data to illustrate regression equations and the effect of noisy data.

    In Excel it would go something like A1..A10: 1..10, B1..B10: =A1*3 (etc.), which is the perfect line, R = 1. We then add noise to the data, e.g. C1 = B1 + NORMINV(RAND(), 1, 3), then similarly D1 = B1 + NORMINV(RAND(), 1, 5). We end up with a few data sets, and we then see how good the program (SPSS/Excel) is at retrieving the relationship (y = 3x + 0 in this case) via the regression equation.

    We do something similar for the t-test etc. as well, to see the contribution of noise/error to the ratios.

    Cheers

  14. I've just made a Google Sheets version of this: http://goo.gl/xyp7Kg - the main advantage is that it's easy to distribute (as it defaults to asking if you want to create a 'copy' when you access it). Thanks!
