Comments on BishopBlog: "Using simulations to understand p-values"

David Colquhoun, 30 December 2017, 13:12:

Please don't be unhappy. The bit that triggered my comment was "false positives (type I errors)". That sounds superficially like an example of transposing the conditional. I guess the problem is that "false positive" is used in two different senses. I would maintain that its use as a synonym for the type 1 error rate is not the relevant one for the interpretation of tests of significance.

One thing that I have learned is that when discussing these matters one has to be obsessional about the use of words :-)

I realise, too, that this topic wasn't the main point of this post. I'm looking forward to the next one.

deevybee, 28 December 2017, 07:47:

I am unhappy with the statement that my use of terminology is 'potentially misleading'. I define 'type I error' and 'false positive' in a standard fashion; anyone who is confused by David's statement can see, for instance, this explainer by stats guru Doug Altman: http://www.equator-network.org/wp-content/uploads/2014/11/Sample_size_calculation_Doug_Altman.pdf

Although he appears to be criticising inaccurate use of terminology, it seems David is actually objecting to the topic of the blogpost (i.e. how the p-value is determined from the null hypothesis), because he deems this approach misleading. Anyone who reads my post carefully should be able to see that I explained that the conventional use of p-values is misunderstood by many, and I explicitly noted that the p-value does not tell you the probability that your result is a false positive, which is the focus of David's concern.

These are indeed separate matters, and the latter is one I aim to cover in a later post.

David Colquhoun, 27 December 2017, 19:41:

You certainly can't say anything much about the probability of H0 without using Bayes' theorem. One problem is that there is no single "full-blown Bayesian analysis"; there's an infinitude of them. Nevertheless, it seems to me folly to ignore the prior probability, despite the fact that you don't know it. My version is perhaps the simplest Bayesian approach, but it gives results quite close to at least two fancier approaches, with a lot less maths.

David Colquhoun, 27 December 2017, 19:18:

Thanks Dorothy. I didn't realise that there was a limit on links.

I think that it is potentially misleading to refer to Type 1 errors as false positives. I say that because what really matters is the probability that your result is a false positive: your false positive risk. To get this you need to know the total number of positives, false and true, as illustrated in Fig 2 of my 2014 paper. I say that the false positive risk is what people really want because, sadly, most users still seem to think it is what the p-value tells you.

I suggest that better ways to put the result are as follows. Using your numbers, p = 0.027, n = 80 and d = 0.3, you can calculate that you'd have to be 77% sure that there was a real effect before you did the experiment in order to achieve a false positive risk of 0.05. An alternative way of putting this is to note that your minimum false positive risk would be 15% (that's for prior odds of 1). These numbers certainly show the weakness of the evidence provided by p = 0.027.

The assumptions behind these calculations are given in my 2017 paper (thanks for linking to that).

Despite your best efforts, it's disappointing that few people seem to download R scripts, so we made a web calculator that makes it easy to get the numbers cited above (you can even do it on your phone). The web calculator is at: http://fpr-calc.ucl.ac.uk/

LBokeria, 27 December 2017, 17:13:

Thank you. Taking that course has been on my wish list for a while...

deevybee, 27 December 2017, 17:06:

Here is David Colquhoun's paper on the topic: http://rsos.royalsocietypublishing.org/content/4/12/171085.article-info
He also had additional comments, which I hope he will be able to add if the weblinks are omitted.

deevybee, 27 December 2017, 17:01:

I can recommend the free Coursera course by Daniel Lakens, 'Improving Your Statistical Inferences', for a very clear explanation.
In addition, David Colquhoun has much to say on this matter and has been attempting to post a comment here, which Blogger rejected because it had several links. I will add details of that as well.

LBokeria, 26 December 2017, 09:28:

"The most common mistake is to think that the p-value tells you how likely the null hypothesis is given the evidence."

This is like a stab in the heart, realizing that I'm making this mistake all the time...

So to tell how likely H0 is given the evidence, we'd need a full-blown Bayesian analysis, with a prior and everything? Or is there another way?
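The two numbers Colquhoun quotes above (a minimum false positive risk of about 15% at prior odds of 1, and a prior of about 77% needed to bring the FPR down to 0.05) can be reproduced with a short sketch of the "p-equals" calculation described in his 2017 paper. This is only a sketch under stated assumptions: it uses a normal approximation to the two-sample t test (his web calculator uses t distributions, so its output differs slightly), and the function names here are mine, not the calculator's.

```python
# Sketch of the "p-equals" false positive risk (FPR) calculation, under a
# normal approximation to the two-sample t test (an assumption; the
# fpr-calc.ucl.ac.uk calculator uses the t distribution).
from math import erf, exp, pi, sqrt

def phi(z):        # standard normal density
    return exp(-z * z / 2) / sqrt(2 * pi)

def Phi(z):        # standard normal CDF
    return 0.5 * (1 + erf(z / sqrt(2)))

def z_from_p(p):   # two-sided p-value -> |z|, by bisection
    lo, hi = 0.0, 10.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if 2 * (1 - Phi(mid)) > p:
            lo = mid      # mid too small: observed z must be larger
        else:
            hi = mid
    return (lo + hi) / 2

def fpr(p, n, d, prior_h1):
    """FPR for observed two-sided p, group size n, true effect size d,
    and prior probability prior_h1 that a real effect exists."""
    z = z_from_p(p)
    ncp = d * sqrt(n / 2)                  # expected z under H1
    like_h0 = 2 * phi(z)                   # density of +/-z under H0
    like_h1 = phi(z - ncp) + phi(z + ncp)  # density of +/-z under H1
    odds_h1 = (prior_h1 / (1 - prior_h1)) * like_h1 / like_h0
    return 1 / (1 + odds_h1)

def prior_for_fpr(p, n, d, target):
    """Prior P(H1) needed to achieve a given FPR (FPR falls as the
    prior rises, so bisection works)."""
    lo, hi = 1e-6, 1 - 1e-6
    for _ in range(100):
        mid = (lo + hi) / 2
        if fpr(p, n, d, mid) > target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(round(fpr(0.027, 80, 0.3, 0.5), 2))             # ~0.15, the minimum FPR at prior odds of 1
print(round(prior_for_fpr(0.027, 80, 0.3, 0.05), 2))  # ~0.78, close to the "77% sure" figure
```

Under these assumptions the sketch lands on roughly 0.15 and 0.78, matching the figures quoted in the comment to within the normal-vs-t approximation.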
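On LBokeria's closing question, a simulation makes the distinction in this thread concrete without any Bayesian machinery: the significance threshold fixes the type 1 error rate at 5% by construction, yet the fraction of *significant* results that are false positives depends on how often a real effect is present. This sketch reuses the thread's numbers (n = 80 per group, d = 0.3, normal approximation to the t test); the 10% prior is an illustrative assumption, not a figure from the discussion.

```python
# Simulate many experiments, a fraction prior_h1 of which have a real
# effect, and count how many "p < .05" results are false positives.
import random

random.seed(1)
n, d, crit = 80, 0.3, 1.96      # normal approximation: |z| > 1.96 ~ p < .05
ncp = d * (n / 2) ** 0.5        # expected z statistic when the effect is real
prior_h1 = 0.10                 # assumed: 10% of tested hypotheses are real

sig, false_pos = 0, 0
for _ in range(200_000):
    real = random.random() < prior_h1
    z = random.gauss(ncp if real else 0.0, 1.0)
    if abs(z) > crit:           # "significant"
        sig += 1
        false_pos += not real

# With a 10% prior and this modest power, roughly half of the
# significant results are false positives, even though alpha is 5%.
print(f"false positive risk among significant results: {false_pos / sig:.2f}")
```

Changing `prior_h1` shows why the p-value alone cannot give P(H0 | evidence): the same α = 0.05 yields very different false positive risks depending on the prior, which is the point both commenters are circling.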