Comments on BishopBlog: "The joys of inventing data" (14 comments)

---
sjgknight — 1 September 2016

I've just made a Google Sheets version of this: http://goo.gl/xyp7Kg. The main advantage is that it's easy to distribute, as it defaults to asking whether you want to create a copy when you access it. Thanks!

---
Rob Stone — 22 June 2012

Hi Dorothy, just seen this nice blog.

We run a practical (using SPSS, but that's not important) using synthetic data to illustrate regression equations and the effect of noisy data.

In Excel it would go something like this: A1..A10 contain 1..10, and B1..B10 = A1 * 3 (etc.), which is the perfect line, R = 1. We then add noise to the data, e.g. C1 = B1 + NORMINV(RAND(), 1, 3), and similarly D1 = B1 + NORMINV(RAND(), 1, 5). We end up with a few data sets, and we then see how good the program (SPSS/Excel) is at retrieving the relationship (y = 3x + 0 in this case) via the regression equation.

We do something similar for the t-test etc. as well, to see the contribution of noise/error to the resulting ratios.

Cheers
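Rob Stone's Excel recipe can be sketched equivalently in Python (a minimal sketch, assuming numpy and scipy are available; the mean of 1 and SDs of 3 and 5 mirror his NORMINV(RAND(), 1, 3) and NORMINV(RAND(), 1, 5) calls):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Perfect line y = 3x, analogous to columns A and B in the Excel version
x = np.arange(1, 11)
y_perfect = 3 * x

# Add Gaussian noise of increasing spread, like NORMINV(RAND(), 1, 3) etc.,
# then see how well regression recovers the true slope of 3
slopes = {}
for sd in (3, 5):
    y_noisy = y_perfect + rng.normal(loc=1, scale=sd, size=x.size)
    fit = stats.linregress(x, y_noisy)
    slopes[sd] = fit.slope
    print(f"sd={sd}: slope={fit.slope:.2f}, intercept={fit.intercept:.2f}, r={fit.rvalue:.3f}")
```

The noisier the data, the further the recovered slope and intercept tend to drift from the true y = 3x line, which is the point of the classroom exercise.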
---
Nic — 1 December 2011

Hi,
It's so cool to see you referring people to our video "Understanding the p-value". It has been very well received. We have now remade it (with our own artwork) and updated it. It forms part of a new iPad app called AtMyPace: Statistics, with accompanying quizzes to help consolidate learning.
http://www.youtube.com/watch?v=eyknGvncKLw
You can see a video about the app at:
http://www.youtube.com/watch?v=S5XJaPlujjs
Hope you like them.

---
Anonymous — 15 November 2011

It does, mostly. It makes sense that there is always that 1-in-20 chance that your results are not significant after all.

The following shows how a non-statistical mindset works (at least in my case): you set twenty variables (cognitive measures) that interest you in relation to handedness, and only one of them seems to show a difference between left- and right-handed people. Since you tried to find differences in cognitive performance between left- and right-handed individuals, you probably expected most of the measures (maybe not all) to differ between the two groups. You did not find an overall difference, but you did find a difference in just one cognitive measure.

So, if one of the measures shows a difference, isn't this already in line with what you tried to find, i.e. a difference in cognitive performance between left- and right-handed people? Would it not be wrong to say that there is no difference, if that is what you initially wanted to show and that is what you found? Or should you just say that there is no overall cognitive difference between left- and right-handed individuals, and then test them again with the same test, this time with the prior hypothesis that we expect a difference in mathematical performance depending on participants' handedness?

You wrote "...and submit it to a learned journal, not mentioning the other 19 tests. After all, they weren't very interesting. Sounds OK? Well, it isn't...". Is the problem in this case the fact that the person would not report the other 19 cases where there was no difference between the groups? Had s/he done so, would it be a statistically relevant finding? I am not trying to split hairs or to be irritatingly idiotic, but these are just some of the questions that pop into my mind.

I might be showing how elementary my understanding of probabilities is, but maybe you can gain some insight from this into how to teach stats to beginners.

---
deevybee — 15 November 2011

Anonymous: Yup. That's it. Does it make sense to you?

---
Anonymous — 14 November 2011

Thanks for the clarification. Let's see if I got it right (you can use me as a case study of stats learning difficulties among university graduates): if we use the 0.05 value, there is always a chance (1 in 20) that one of those 20 variables will show significance in the simulation, and hence we should say beforehand what we are looking for?

---
deevybee — 13 November 2011

Hi Anonymous. Thanks for the comment. I realise what I said was unclear, and have modified it now. The key point is that in this example there are 20 variables. For each variable, random fluctuations lead to differences in the means of the two groups. The p-value tells you how likely it is that a particular size of mean difference will arise in two samples when there really is no difference between the groups. Suppose that for one variable we have a largish difference, D, between the two groups, and p for that difference is .05. This tells us that if we took one hundred random samples, about five of them would be expected to show a difference at least as big as D, just because of chance fluctuations. And if we take twenty random samples, then on average 1/20 = .05 of them would be expected to show a difference as big as D.

It's worth playing around with the simulation: try adding more columns, look at how the p-values change as the mean differences change, and look at how many p-values are "significant". I'll continue to think about this and aim to come up with something better, but hope meanwhile that this is useful.

In addition, you might find that this video helps: http://www.youtube.com/watch?v=ZFXy_UdlQJg
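deevybee's point above, that with 20 variables measured on purely random data roughly one test in twenty will come out "significant" at p < .05, can be checked with a quick simulation (a hypothetical Python sketch, not part of the original Excel exercise):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_runs, n_vars, n_per_group = 2000, 20, 10

sig_counts = []
for _ in range(n_runs):
    # Two groups drawn from the SAME distribution: any group "difference"
    # on any of the 20 variables is pure chance
    group1 = rng.normal(size=(n_vars, n_per_group))
    group2 = rng.normal(size=(n_vars, n_per_group))
    p_values = stats.ttest_ind(group1, group2, axis=1).pvalue
    sig_counts.append(np.sum(p_values < 0.05))

# Expected number of "significant" results per run: 20 * 0.05 = 1
print(f"mean significant tests per run: {np.mean(sig_counts):.2f}")
```

Over many runs the average settles close to one spurious "significant" result per batch of 20 tests, which is exactly the trap of picking the winning variable after the fact.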
---
Anonymous — 13 November 2011

Thanks for the nice post. I've been a regular reader of your blog for a while now and have really enjoyed your writing.

I've read about this problem many times, and I have been told about it in stats classes, but I don't think I really get it (even though I have graduated with a degree in psychology), and I would really like to grasp the idea. Since people are saying that your example is a great illustration of the problem, I was hoping you had time to elaborate on it? I can understand everything you say until:

"In fact, on average, across numerous runs, the average number of significant values is going to be one. That's what p < .05 means! If this doesn't convince you of the importance of specifying your hypothesis in advance, rather than selecting data for analysis post hoc, nothing will."

I think I'm missing a crucial piece of the puzzle needed to understand what just happened there. Since I don't know what happened, it's hard to explain exactly what I don't get. If you can provide a reference, or help me understand the idea with a further explanation, I promise to make at least three other people understand it as well, and to make them teach it to another three people, etc.

---
deevybee — 8 October 2011

Thanks to Anonymous for pointing out the error in the formula: now corrected. Wiping egg from face.

JJ: Most stats books advise on how to deal with multiple comparisons, but the best solution depends on your research design and exactly what hypothesis you are testing. A common approach is simply to adjust the p-value required for significance. This site is a good starting point:
http://nitro.biosci.arizona.edu/workshops/aarhus2006/pdfs/Multiple.pdf
For examples from biotechnology, see:
http://www.nature.com/nbt/journal/v27/n12/full/nbt1209-1135.html
Wikipedia gives quite a detailed treatment, and links to resampling/bootstrapping methods, which are computationally intensive but can be used to derive exact p-values:
http://en.wikipedia.org/wiki/Multiple_comparisons

If a corrected p-value makes your cherished result non-significant, but you are reluctant to accept that it is just a chance finding, you could treat your study as hypothesis-generating and collect new data to test the association between handedness and maths as an a priori hypothesis. That would be the only way to give confidence in your result.

---
Anonymous — 8 October 2011

TTEST(B2:B11,B12:B22,2,2)

---
JJ — 6 October 2011

Very nice posts, but you don't provide the solution to this problem.

I guess the solution is either to correct for multiple comparisons (looking for p-values smaller than 0.05/20) or to look for an interaction between tests (20 factors) and groups (left- and right-handed people). So what's the best solution?

Also, I really enjoyed the paper from Nieuwenhuis and colleagues in Nature Neuroscience about how neuroscientists fail to test for interactions in their data (Nieuwenhuis, S., et al., 2011: doi:10.1038/nn.2886).
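The Bonferroni-style correction JJ mentions (requiring p < 0.05/20), one way of adjusting the significance threshold as deevybee describes, can be sketched as follows (a hypothetical Python illustration, not code from the original post):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
alpha, n_tests = 0.05, 20

# Twenty t-tests on purely random data, as in the blog's handedness example:
# both "groups" come from the same distribution, so every low p-value here
# is a false positive
group1 = rng.normal(size=(n_tests, 10))
group2 = rng.normal(size=(n_tests, 10))
p_values = stats.ttest_ind(group1, group2, axis=1).pvalue

uncorrected = int(np.sum(p_values < alpha))           # threshold 0.05
bonferroni = int(np.sum(p_values < alpha / n_tests))  # threshold 0.0025
print(f"significant uncorrected: {uncorrected}, after Bonferroni: {bonferroni}")
```

The corrected threshold holds the chance of even one false positive across all 20 tests at roughly 5%, which is why a lone "significant" result from an unplanned sweep of variables will usually vanish after correction.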
---
Neuroskeptic — 6 October 2011

Excel is pretty good, and the advantage is that everyone knows how to use it.

R or MATLAB are clearly more powerful, but I think that, when teaching students, what you want is for them to actually generate the data themselves, so they can see it really is random, and how easy it is for random data to produce significant results.

---
Anonymous — 5 October 2011

This is sheer brilliance. I hope you will forgive me for using this in teaching (properly attributed, of course). Thanks!

---
Anonymous — 5 October 2011

Wonderful post, Dorothy, thanks. I'd never thought of using Excel like that! I will definitely use some of this in my statistics teaching.