In a previous blogpost, I criticised a recent paper claiming that playing action video games improved reading in dyslexics. In a series of comments below the blogpost, two of the authors, Andrea Facoetti and Simone Gori, have responded to my criticisms. I thank them for taking the trouble to spell out their views and giving readers the opportunity to see another point of view. I am, however, not persuaded by their arguments, which make two main points. First, that their study was not methodologically weak and so Current Biology was right to publish it, and second, that it is unfair, and indeed unethical, to criticise a scientific paper in a blog, rather than through the regular scientific channels.
Regarding the study methodology, as noted above, the principal problem with the study by Franceschini et al. was that it was underpowered, with just 10 participants per group. The authors reply with an argument ad populum, i.e. many other studies have used equally small samples. This is undoubtedly true, but it doesn't make it right. They dismiss the paper I cited by Christley (2010) on the grounds that it was published in a low impact journal. But the serious drawbacks of underpowered studies have been known about for years, and written about in high- as well as low-impact journals (see references below).
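To make concrete what 'underpowered' means here, the following minimal sketch (in Python, using the statsmodels package; the effect sizes are illustrative assumptions, not estimates from the Franceschini et al. data) computes the power of a two-group comparison with 10 participants per group, and the sample size that would be needed for 80% power:

```python
# Illustrative only (not from the original post): power of a two-group comparison
# with n = 10 per group, using statsmodels. The effect sizes are assumed values.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Power of an independent-samples t-test, alpha = .05, 10 participants per group,
# for an assumed "medium" (d = 0.5) and "large" (d = 0.8) true effect.
for d in (0.5, 0.8):
    achieved = analysis.power(effect_size=d, nobs1=10, alpha=0.05, ratio=1.0)
    print(f"d = {d}: power with 10 per group = {achieved:.2f}")

# Participants per group needed to detect d = 0.5 with 80% power.
n_needed = analysis.solve_power(effect_size=0.5, power=0.8, alpha=0.05, ratio=1.0)
print(f"n per group for 80% power at d = 0.5: {n_needed:.0f}")
```

Even under fairly generous assumptions about the true effect, the chance of such a small trial detecting it is modest, which is exactly the point made in the references below.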
The response by Facoetti and Gori illustrates the problem I had highlighted. In effect, they are saying that we should believe their result because it appeared in a high-impact journal, and now that it is published, the onus must be on other people to demonstrate that it is wrong. I can appreciate that it must be deeply irritating for them to have me expressing doubt about the replicability of their result, given that their paper passed peer review in a major journal and the results reach conventional levels of statistical significance. But in the field of clinical trials, the non-replicability of large initial effects from small trials has been demonstrated on numerous occasions, using empirical data - see in particular the work of Ioannidis, referenced below. The reasons for this 'winner's curse' have been much discussed, but its reality is not in doubt. This is why I maintain that the paper would not have been published if it had been reviewed by scientists who had expertise in clinical trials methodology. They would have demanded more evidence than this.
The response by the authors highlights another issue: now that the paper has been published, the expectation is that anyone who has doubts, such as me, should be responsible for checking the veracity of the findings. As we say in Britain, I should put up or shut up. Indeed, I could try to get a research grant to do a further study. However, I would probably not be allowed by my local ethics committee to do one on such a small sample and it might take a year or so to do, and would distract me from my other research. Given that I have reservations about the likelihood of a positive result, this is not an attractive option. My view is that journal editors should have recognised this as a pilot study and asked the authors to do a more extensive replication, rather than dashing into print on the basis of such slender evidence. In publishing this study, Current Biology has created a situation where other scientists must now spend time and resources to establish whether the results hold up.
To see just how damaging this can be, consider the case of the FastForword intervention, developed on the basis of a small trial initially reported in Science in 1996. After the Science paper, the authors went directly into commercialization of the intervention, and reported only uncontrolled trials. It took until 2010 for there to be enough reasonably-sized independent randomized controlled trials to evaluate the intervention properly in a meta-analysis, at which point it was concluded that it had no beneficial effect. By this time, tens of thousands of children had been through the intervention, and hundreds of thousands of research dollars had been spent on studies evaluating FastForword.
I appreciate that those reporting exciting findings from small trials are motivated by the best of intentions – to tell the world about something that seems to help children. But the reality is that, if the initial trial is not adequately powered, it can be detrimental both to science and to the children it is designed to help, by giving such an imprecise and uncertain estimate of the effectiveness of treatment.
Finally, a comment on whether it is fair to comment on a research article in a blog, rather than going through the usual procedure of submitting an article to a journal and having it peer-reviewed prior to publication. The authors' reactions to my blogpost are reminiscent of Felisa Wolfe-Simon's response to blog-based criticisms of a paper she published in Science: "The items you are presenting do not represent the proper way to engage in a scientific discourse". Unlike Wolfe-Simon, who simply refused to engage with bloggers, Facoetti and Gori show willingness to discuss matters further, and present their side of the story, but it is nevertheless clear that they do not regard a blog as an appropriate place to debate scientific studies.
I could not disagree more. As was readily demonstrated in the Wolfe-Simon case, what has come to be known as 'post-publication peer review' via the blogosphere can allow for new research to be rapidly discussed and debated in a way that would be quite impossible via traditional journal publishing. In addition, it brings the debate to the attention of a much wider readership. Facoetti and Gori feel I have picked on them unfairly: in fact, I found out about their paper because I was asked for my opinion by practitioners who worked with dyslexic children. They felt the results from the Current Biology study sounded too good to be true, but they could not access the paper from behind its paywall, and in any case they felt unable to evaluate it properly. I don't enjoy criticising colleagues, but I feel that it is entirely proper for me to put my opinion out in the public domain, so that this broader readership can hear a different perspective from those put out in the press releases. And the value of blogging is that it does allow for immediate reaction, both positive and negative. I don't censor comments, provided they are polite and on-topic, so my readers have the opportunity to read the reaction of Facoetti and Gori.
I should emphasise that I do not have any personal axe to grind with the study's authors, whom I do not know personally. I'd be happy to revise my opinion if convincing arguments are put forward, but I think it is important that this discussion takes place in the public domain, because the issues it raises go well beyond this specific study.
References
Button, K. S., Ioannidis, J. P. A., Mokrysz, C., Nosek, B. A., Flint, J., Robinson, E. S. J., & Munafo, M. R. (2013). Power failure: why small sample size undermines the reliability of neuroscience. Nature Reviews Neuroscience, advance online publication. doi: 10.1038/nrn3475
Ioannidis, J. P. A. (2005). Why most published research findings are false. PLoS Medicine, 2(8), e124. doi: 10.1371/journal.pmed.0020124
Ioannidis, J. P. A. (2008). Why most discovered true associations are inflated. Epidemiology, 19(5), 640-648.
Ioannidis, J. P. A., Pereira, T. V., & Horwitz, R. I. (2013). Emergence of large treatment effects from small trials - reply. JAMA, 309(8), 768-769. PMID: 23443435
Blogging as post-pub peer review is entirely appropriate. I'm guessing their concern is about the lack of peer review, but given that no one will mistake a blog post for a journal article I can't say I'm concerned. If you don't want people to critique your work, then don't publish the work.
Blogging isn't a replacement for peer reviewed commentary, of course, but it can serve an invaluable role in focusing the discussion past simple misunderstandings. I saw a great (if lower stakes) example of this a while back. Thomas Schenk is in a debate with Mel Goodale and David Milner about whether their classic patient DF really is evidence for their two visual stream hypothesis. The debate is grinding on slowly in the literature with point and counterpoint papers, mostly talking about misconceptions each has about the last paper of the other one.
I blogged one of Schenk's papers because I thought it was interesting and that this whole question is fascinating. A commenter had a couple of questions about something Schenk was saying that I couldn't answer, so I emailed him the post and we resolved the confusion in about 5 minutes.
It occurred to me then that if these guys went round and round a few times on a blog post, in public where we could all see, and then published their more complete thoughts, the literature would be a much better place, and much sooner too.
I think capitalising on chance and fishing for significance are despicable, especially in the clinical domain. But I can't help but feel a bit sad.
They are really proud of their work and do not think they have anything to hide, they would even take you up on your bet.
I think this is due to a blissful unawareness of all the problems surrounding small samples, researcher degrees of freedom and so on. Others, I think - those seeking to commercialise an intervention as soon as possible (e.g. Cogmed) - knowingly exploit these problems with science and surely deserve a public lashing more.
But yeah, truth has to be told, even if the people who messed up a little are nice and did so unknowingly.
Andrew - thanks for that example. I feel 'debates' in journals never work, whereas in the blogosphere they are great - you have more time to think than you would in a live debate, but without the inordinate delays involved in peer-reviewed journals. Mistakes get corrected, or disagreements get clarified, far faster than by any other process.
But Anon highlights the downside of all of this - the potential for interpersonal grief. I stress I don't want to "trash" or "bash" the authors personally, but I do think there are problems with their paper.
In fact, I am starting to realise why more 'official' forms of post-publication peer review have not taken off. Nobody likes to say anything for fear of causing upset. But, as you point out, at the end of the day, the key thing is getting the science right, and we aren't likely to do that unless we say what we really think - and not only in the context of anonymised peer review.
My central concern is with the issues raised by Ioannidis in several of his papers (see refs) - the scientific process seems highly dysfunctional right now, and I think we have to find ways to mend it. And that does mean taking replication more seriously.
It's even worse than reticence by other researchers: In a case I know, Science avoided publishing a prompt experimental rebuttal showing a central methodological flaw of a paper it had published (Science "timed it out" in review). So instead the rebuttal had to appear in PNAS. So far as I know, no correction to the Science paper was published.
There is a website that allows anonymous post-publication peer review, but restricts it to commentators who have themselves published peer reviewed work. The site is pubpeer.com and already it seems to be taking off, particularly in neuroscience.
deevybee said: "In fact, I am starting to realise why more 'official' forms of post-publication peer review have not taken off."
Until whoever is in charge offers a better alternative, blogs and tweets are fabulous tools to accelerate discourse.
And: "But, as you point out, at the end of the day, the key thing is getting the science right..."
If this isn't the single most important point then we should all be questioning our personal motivations and assessing our biases and conflicts of interest.
If something is incorrect then it doesn't matter if it's published in a peer-reviewed journal, in a TV documentary or tweeted. Wrong is wrong. The medium has no intrinsic ability to moderate the validity of a result. But - and this is a biggie - the medium does have the intrinsic ability to determine the speed with which the result is disseminated, and perhaps corrected.
Let's say, entirely hypothetically at the moment, that I find a flaw in the way motion correction is handled in fMRI. Hands up who wants to wait 12-18 months to find out about the problem via a peer-reviewed paper? And hands up who prefers that I blog it and tweet an announcement so that you can know before breakfast tomorrow morning?
Whether post-publication peer review (PPPR) benefits science depends on the blog and the blogger. Your blog happens to be very good, because your criticisms are very good, and your expertise in data analysis is very good. The problem with PPPR is that some prominent blogs and bloggers are not as good, and when they overstep their expertise, or perform hasty or ill-considered analyses, they damage the field just as badly as those who publish bad studies. Worse yet, in the case of anonymous bloggers, the writer has virtually nothing to lose in blurting out sensational criticism and 'debunking' which may or may not be valid.
ReplyDeletePPPR needs another R: post publication peer review review. One might say the comments sections on blogs are exactly this. But they appear at the bottom of the page, are often hidden until a button is clicked, and can be deleted. In any case the blog owner has a clear 'home field advantage' -- the sole author and editor of their own open access journal which will be read, shared and retweeted by lay-people who can't tell a good debunking from a bad one.
So I say to those influential bloggers who are smart, careful and comment on topics or methods with which they are familiar: blog away. Otherwise, please keep your 'thoughts' to yourself until you're ready to publish them in a peer reviewed journal.
If critical discussion on blogs is inappropriate then is it inappropriate by skype, email, phone or face-to-face?
Blog on!
The watchers sorely need watching.
Publish papers and grow thicker skin. Absolutely ridiculous to criticize discussion of published research in non-peer reviewed forums.
Book reviewers don't criticize books by writing books.
Film critics don't criticize films by making films.
Critiques of research papers can take place outside of peer review. It's been happening over coffee and at conferences for decades.
Thank you for bringing this up. I find it quite interesting that the authors find blogging offensive. Debate is the basis of good science, as is replication. In this day and age it's rather difficult to replicate studies, so we have to play devil's advocate about the methodology used. As Andrew clearly states, if you don't want your work to be criticized, don't publish.
It also touches on a very important point: that many scientists really think that publishing in a high impact factor journal makes their claims true and untouchable from the outside world (or people who haven't published in Nature or Science). This is very dangerous for science and for the work environment of scientists. As you say, anyone from the clinical sciences would find the sample quite small, especially if you're used to having >100 subjects per study.
This whole discussion is a good example for grad students. Blogging and tweeting are part of the future of science. Open source and open discussion as well.
We've seen plenty of evidence of "if it's published in a high impact journal then it's true" reasoning. Andrew Wakefield in the Lancet, anyone?
Once something is past peer-review it is notoriously difficult for someone who is not one of the authors to criticize it, let alone get the paper retracted in the case of evident falsehoods.
This is not only a problem of small sample sizes, this is a problem of peer-reviewed journal publishing in general.
An excellent, nuanced post Dorothy - thanks for this. I couldn't agree more that blogging is an entirely appropriate form of post-publication peer review. The hope would be that someone, somewhere, picks up on the post, notes any potential flaws in the study, then sees fit to improve upon the methods in a way that would benefit the field (i.e. it's not a matter of 'bashing' a paper).
One other minor comment about the paper regards the use of video games themselves. The authors cite various (excellent) papers by Bavelier and colleagues regarding transfer of perceptual learning from game playing to other areas. However, as EJ Wagenmakers notes in this paper (http://www.ejwagenmakers.com/submitted/DonGaming.pdf), this is actually a highly surprising effect - normally we only see context-specific perceptual learning effects. As such, the jury's out at the moment as to whether playing action video games does indeed confer benefits in other domains. We need much more solid evidence before assuming that they do, not least in the case of the present paper you discuss.
This webpage contains links on all sides of the brain training generalizability issue. It includes links to quite a few failures to replicate Bavelier's research.
http://psychfiledrawer.org/topics/view.php?t=brain-training--far-transfer-effects-643-763
This whole "it's published in XYZ journal so now it's true" has been shown over and over again (see also previous comments) to be factually wrong and statements like these actually disqualify the person making that statement as obviously ill-informed (i.e., incompetent on basic scientific procedure).
These sorts of efforts to silence scientific debate are very troubling as they reveal a deeply-rooted anti-scientific mentality which uses journal rank to protect the personal interests (i.e., the careers) of scientists. I would tend to see such troubling tendencies as precipitated by the precarious working conditions of scientists these days. Maybe I'm too pessimistic, but I see defenses like these as constituting further evidence that precarious working conditions, together with journal rank, are providing a pernicious field of incentives which are highly counter-productive and potentially threatening the entire scientific enterprise.
Neurosilence Dogood:
ReplyDelete"Whether post-publication peer review (PPPR) benefits science depends on the blog and the blogger. Your blog happens to be very good, because your criticisms are very good, and your expertise in data analysis is very good. The problem with PPPR is that some prominent blogs and bloggers are not as good, and when they overstep their expertise, or perform hasty or ill-considered analyses, they damage the field just as a badly as those who publish bad studies."
It's true that blogs are influential, and that this can be a bad thing when the blogs are wrong. But this is rarely as damaging as a misleading paper, because if a blogger is wrong, and they're subsequently shown to be wrong, all that's happened is that some people have wasted a bit of time on the internet.
But if an experiment is flawed, all of the money and time that went into it could be wasted. And if a researcher sets out to build on some flawed work, that'll be wasted too.
"PPPR needs another R: post publication peer review review. One might say the comments sections on blogs are exactly this. But they appear at the bottom of the page, are often hidden until a button is clicked, and can be deleted. In any case the blog owner has a clear 'home field advantage' -- the sole author and editor of their own open access journal which will be read, shared and retweeted by lay-people who can't tell a good debunking from a bad one."
What you're calling for already exists; it's a blog. Many science blogs start out just as you describe - as someone who thinks they can do it better than existing blogs.
Neuroskeptic said:
ReplyDelete"It's true that blogs are influential, and that this can be a bad thing when the blogs are wrong. But this is rarely as damaging as a misleading paper, because if a blogger is wrong, and they're subsequently shown to be wrong, all that's happened is that some people have wasted a bit of time on the internet.
And you can bet that the speed with which the blogger is corrected will approach a significant fraction of c. So unless the blogger - unscrupulously, imho - blocks the correcting comments or otherwise twists the scenario, the damage will have very limited duration, this being the beauty of the online forum as you suggest. (Charlatans - bloggers or not - are a whole separate subject. Neurobollocks and NeuroLogica blogs are doing excellent work on debunking and skepticism, respectively.)
"But if an experiment is flawed, all of the money and time that went into it could be wasted. And if a researcher sets out to build on some flawed work, that'll be wasted too."
Yes!!! This is what gets my goat about traditional publications. Why must I wait months or years to find out that what I am doing has a flaw that is known by someone else? Is that not immoral? It is most definitely wasteful. Nothing is more likely to raise my blood pressure than finding out much later than I could that I have been urinating to windward. "Oh yeah, we ran into that problem, too. About a year or two back." Huh?
Unrelated tip, folks: If you want to italicize a quote in a response just use the HTML anchors i (start) and /i (stop) each encased in < and >. I can't write them or blogger will interpret them, like this. And I don't know what the escape character is to get them printed explicitly. (Anyone?)
I couldn't agree more with what you write here, and with your comments on the original paper. The more this is said the better. And defending blogging for this purpose should slowly become unnecessary. I think it speaks for the authors that they have posted comments on your initial blog, but methodologically I totally share your view that their study was flawed. More power to you and your pen!!
Post publication peer review needs to be centralized and anonymous in order to be effective. Centralized so I don't have to search a million blogs to find all the comments on a paper and anonymous so I don't fear retribution for saying something I later feel was silly. Check out PubPeer.com/recent for examples.
Thanks to all for comments. I've been quiet as I feel most of you have put things far better than I can, but at last I can disagree with someone, viz Anon of 25 March.
I think there are situations when anonymity does work, exactly as you suggest, and I think it's reasonable for regular peer review. I agree with the argument that anonymity protects junior people who might otherwise risk retaliation if they criticise more powerful figures. And sometimes I am quite relieved if negative reviewer comments on my own papers are anonymised, since no doubt I will have to meet the reviewer at some point and it avoids embarrassment on both sides.
However, I have an instinctive dislike of anonymity in blogs (and comments on blogs!), and that has got me wondering why I should hold such an inconsistent position. Perhaps because the blogging interaction is so much more rapid and direct. So replying to you feels a bit like talking to someone with a sack over their head.
And there is also the point that anonymous comments can be much ruder and snarkier than others (though not in your case, I hasten to add).
Maybe it's really just that a blogger who takes it upon themselves to write a critique is in a very different position from a person who is asked to write a review by a journal editor. I do a lot of reviewing for journals out of a sense of public duty - it often feels like a rather tedious job. In contrast, if I write about a paper in a blog, it's because it has sparked off some kind of reaction, positive or negative, in me, and so I am not just doing a job of work. I have more invested in what I am saying and I reckon I should put myself on the line and not hide behind anonymity. Will be interested in what others think about this.
Re: Anonymous commenting.
Us Kids (I'm 30) are Afraid of getting Fired for Something We Said on Teh Intertubez.
Now, I have wonderful references who back me up when I say stuff- and a pretty nice pub record to boot. I'm not currently affiliated with anyone, but I'm working on that. Heh. (See http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0058332 for my new PLoS ONE about intra-operational brain tumor diagnostics.) I'm doing bio now, but I started in chem, and chem Editors take authorship quite seriously. (Although they are all a mite bit busy atm, so give them some time to .... adjust to this New Internet Order.)
Most "kids" my age are terrified of even disagreeing with The Boss (your PhD or postdoc adviser) since, in many labs, they'll just fire you for disagreeing with their hypothesis.
Phht. I went to Reed College, which does not condone this sort of thing. (See: http://www.scilogs.com/mola_mola/pretentious-moi-literary-quotes-in-science/ for a list of Quotes from my profs, one of my favs is: "When you become wedded to your hypothesis and not your data, that's a little scary." -Ronald McClard, my undergrad thesis adviser.)
But, it is a problem; and it's not just a USA one, it's global. We need protection if we're going to blow the whistle. However, it would appear some PIs are very much more wedded to their hypothesis than they are concerned with listening to and protecting their students.
deevybee said: "However, I have an instinctive dislike of anonymity in blogs (and comments on blogs!), and that has got me wondering why I should hold such an inconsistent position. Perhaps because the blogging interaction is so much more rapid and direct. So replying to you feels a bit like talking to someone with a sack over their head.
And there is also the point that anonymous comments can be much ruder and snarkier than others (though not in your case, I hasten to add)."
I agree with you, but I think it's a small price to pay in order to encourage everyone to participate. If everyone comments on articles and reads the comments left on articles then the "snark" can be policed by the crowd and voted down into the noise. PubPeer is showing that this works very well.
Maybe there are good arguments for anonymous post-publication peer review blogging/commenting but 'fear [of] retribution for saying something I later feel was silly' is not one of them!
Two possibilities:
1. such fear is well grounded in fact and this indicates we have reached a point where open scientific discussion - a really basic and fundamental element of the progress of scientific thought - has become alien to our culture.
2. such fear is without real basis and you just have to think carefully before articulating your argument, accept/correct error if you make some, put comments in a respectful way, etc.
As I am a naive optimist (and because I'd have to change job if the first was true), I choose to believe in the second possibility...
Despite thinking carefully about these statements, I would prefer that they're not attached to my name for eternity.
Unfortunately our scientific careers depend very heavily on our reputations. I read many papers each week and have comments on most of them. I leave those comments on pubpeer.com because I think it's important for us all to share how we interpret papers. Sometimes I am simply wrong with my interpretation. My colleagues certainly won't read all of the comments I have left on pubpeer, but if they read the one(s) in which I am wrong, I could get a reputation for being wrong with a very important reviewer of my papers and grants.
I agree with raphazlab on this one.
Also it is possible to express a viewpoint while indicating a degree of uncertainty about it, and that is the appropriate thing to do if unsure of the validity of what you are saying.
And if you get something wrong, then you should simply admit the error and post a clarification. We all make mistakes.
That's perhaps true and perhaps not the best argument I've ever made, but nevertheless, given the option, I'll choose anonymity. I think these sites should continue to allow it to encourage as much participation as possible.
I wish one of these review sites would really take off because I think it would cause a shockwave in scientific publishing. For that to happen everyone needs to start using one and I think anonymity is necessary for that to happen.
This is very simple. When something is published, it's out there in the world, and open to criticism (or indeed praise) from any quarter, in any venue, via any media. The notion that criticism should only be made in the peer-reviewed literature is entirely wrong-headed, and damaging in multiple respects:
* Even at best, it is desperately slow.
* It is frequently stifled. (I've been a co-author on a rebuttal paper that the original journal wouldn't publish; it's hard not to think they didn't want to be made to look bad.)
* It typically prevents the objections from being clearly stated, requiring as it does that an editor and two or three reviewers can be persuaded to accept it.
* Right from the outset, it completely excludes laymen and women from the discussion.
* It often results in the discussion, when it eventually reaches publication, being behind a paywall that prevents people from reading it.
Really: discussing such matters on blogs is better in almost every dimension than doing so in journals.
Dear Professor Bishop,
We are again here to reply to your reply :-)
We apologize for being so slow, but we are not native English speakers, so it takes time for us to write a decent reply.
Some important changes have happened with your new post: the topic has changed radically, and even if it is an interesting one, it is far from the topic of your first post. Moreover, we have passed from the discussion of facts to the discussion of opinions - again interesting, but very different.
In our previous post we clearly showed that your main point (the idea that our paper would have been rejected by lower impact factor journals because of the sample size) was wrong: several papers with similar sample sizes on the same topic have been accepted by several journals with different impact factors. The message of your post was therefore misleading at best. We also showed that our paper was not methodologically poor, providing you with all the information that supports our view, and showing that your claim about the low power of our study was wrong, or at least very questionable. We did not just reply with an argument ad populum: we showed that there were good reasons to expect a larger effect than the one you supposed, that the correlation was done with an n of 20, and so on. You are obviously free to ignore us, but at least it would be nice if you summarised our reply correctly, because it is clear that many of your readers will not spend time reading it. These are the facts. Now we could go into the domain of opinions, and you can start saying that our reply is not convincing without even saying why (more or less this is what you already did and what you will probably do again), but is your opinion a fact? Obviously not.
We have our opinions and you have your opinions - that is fine for us - but please don't try to transform opinions into facts. We are sorry if we did not convince you (really, no irony here), but you did not convince us either. We still respect your opinion, though.
Talking about opinions, the opinion about science that you wrote in your last post is really interesting: it is surprising, at best, that you say that a scientific study should open and immediately close the debate about a new discovery, simply because the perfect study will have been done with a big sample. This is far away from our idea of science and much closer to our idea of religion. In our opinion every single scientific study opens up new questions more than it gives final answers, and consequently any single study needs to be validated or refuted by following studies; this is the normal process in science from the origin of science itself, and it is again completely independent of the sample size of the first study. Reading your post, the idea that comes out is that you may believe that if you do a study with a large sample size this study will be the truth and no further studies will have to be done about it: topic closed, end of the discussion. This is, in our opinion, far away from what science is and should be: a dynamic process full of discoveries, re-discoveries, refutations, debates and so on. Saying that a larger study would prevent money being spent on future studies on the same topic is, in our opinion, completely against what science should be. Again, just an opinion; if you or your readers want to bash us for our opinions, go ahead: it's free.
Finally, your new post puts in our mouths a sentence that we did not write anywhere, but which some of your readers (followers?) immediately used to bash us. It is this one: "they are saying that we should believe their result because it appeared in a high-impact journal". We re-read our post searching for this sentence but... it is not there. Do you think it is fair to attribute to us a sentence that we did not write? We don't think so, honestly. We believe that sources do not all have the same relevance, but we never said that because our paper was accepted by an important journal it has to be considered the truth; the truth is a matter for religion, actually. We believe that the quality check of a scientific journal is much better than the quality check of a blog; this is quite clear. The gain you have in speed is paid for in the quality and reliability of the source. Is this price worth it? We believe that it is not. In our opinion - an opinion that is now even more solid simply from reading some of the comments on your blog - the suitable arena for a scientific discussion is the peer reviewed scientific journals. We are not saying that this is the only possible opinion (be free again to bash us for thinking this way), but the danger of discrediting a scientific work without it passing any expert control is clear to us and, in our opinion, it is not a good thing for science. The argument that in some specific cases blogs have helped science does not make, by any stretch of the imagination, the rule that blogs are always good for science. Also, the argument that people will not mistake the relevance of a blog post for the relevance of a scientific paper is clearly wrong: just try to read some of the posts here; several readers will not even read the original article, and they will base their opinions only on your blog post. In sum, we believe that it is a great responsibility to post on a blog and to discredit a scientific article, especially without a solid basis; however, this is simply our opinion and everybody is obviously absolutely free to do that. We also believe that people are obviously free to criticize everything they want, but saying that a study will not be replicated, without data showing that, is simply as relevant as the exact opposite statement. Only time will tell: the relevance of a scientific discovery needs time and further evidence to be evaluated, and this is the general rule in science, no matter the sample size of the first study that raised the question. As we wrote before, we are not willing to have a useless and virtually endless back and forth discussion here. It is also clear to us that the central topic has now radically changed, but we have nothing to hide and we always reply to everybody who asks us questions about our studies. On the other hand, we are not interested in replying to people who say that our opinions are wrong, because we are not pretending that our opinions are the truth; they are just our opinions :-).
A) Huge blocks of text are painful to read, try to break it up a little bit, OK?
I know it's hard for non-Native speakers trying to publish in English, most of you just want to be clear. :)
B) A lot of this conversation reflects the different ways the EU and the USA approach doing and funding long-term science. Both have good points to them, but both also have weaknesses.
(I am just back home in the USA from doing a German Postdoc and whooo boy was that .... Different.)
I'm arriving late on this debate.
My initial reaction to Andrea's paper was much the same as Dorothy's: 1) too small a sample to draw any firm conclusion; 2) while it might have been OK to publish this in a more low-key journal for the sake of reporting and attracting the attention of potential replicators, Current Biology is really guilty of not enforcing standard criteria for clinical trials and of unleashing the media.
About sample sizes, here are two more references that I find useful:
Coyne, J. C., Thombs, B. D., & Hagedoorn, M. (2010). Ain't necessarily so: review and critique of recent meta-analyses of behavioral medicine interventions in health psychology. https://dl.dropbox.com/u/23608059/aint%20necessarily%20so.pdf
Kraemer, H. C., Gardner, C., Brooks, J. O., & Yesavage, J. A. (1998). Advantages of excluding underpowered studies in meta-analysis: Inclusionist versus exclusionist viewpoints. Psychological Methods,3,23–31.
For a summary, see discussion of point 3 pp 108-109 in the Coyne paper. Basically they argue that given typical effect sizes for behavioural interventions, meta-analyses of clinical trials should not include trials with a sample size of less than 35 per cell (with 50 a more desirable threshold).
If I were to embark on a clinical trial, I would think twice if I planned to do a study that would not be included in subsequent meta-analyses by lack of adherence to a standard criterion.
And about whether such discussion is OK on blogs, well this is just the same as what goes on in all conferences (discussions are not peer-reviewed, so what?). The advantage is that we do not have to wait for the chance encounter of Dorothy Bishop and Andrea Facoetti to have that debate, and we can all attend and participate.
ReplyDeleteFrom the point of view of the authors, I think that this acceptable to the extent that this takes place only on a handful of blogs of reputable and thoughtful people, but not if this were on dozens of anonymous blogs.
Why would it be a problem for this discussion to have happened on an anonymous blog? We're scientists: we judge an argument on the basis of its data and its correctness, not on who makes it.
Post-publication Peer Review strikes me as rather a grandiose term for something rather basic: scientific debate and discussion.
Of course this discussion needs to be (and is) free and open, but it is by no means the same as a peer-review process, which is anonymous and controlled and works before publication to, in an ideal world, improve the paper.
The pretentious epithet PPPR seems to aim at elevating mere participants in a debate into some sort of official 'reviewer' rank, perhaps in order to give one's arguments that extra shot of authority?
Well, it varies. Rosie Redfield's blogging about the Arsenic Life paper certainly rose to the level of peer-review, and essentially filled the gap left by the failure of the pre-publication peer-reviewers of that paper to do their job.
But of course not all blog commenting is on that level!
Hi Franck,
It is surprising to see you here in a blog :-)
We had no doubt that you would not like our results, not from a methodological point of view but just because we found again that dyslexia is not ONLY a phonological problem :-)
Again, please note that our study is not underpowered; you may not have read our reply to Prof. Bishop's previous post.
Regarding the number of measures for each cell, we absolutely do not have a problem, because for both attentional and reading measurements we had around 50 measurements per cell or even more.
See you in Oxford next April,
Best,
Andrea and Simone
A very interesting post and a very interesting argument. I think this blogpost and the comments afterwards themselves argue in favour of review by blog, whichever side one takes. This sort of wider discussion has also become the norm in many areas of physics thanks to the ArXiv.
The problem is that this is not always true; for example, a brief perusal of many 'climate skeptic' blogs shows a frightening world of careful, peer-reviewed papers being rubbished by bloggers who know just enough science to sling mud, but not enough to properly evaluate the paper (and that's the more reasonable ones). Such blogs often have a huge following and lead directly to misleading media comment. The problem is so serious that many bloggers like this one are beginning to wish blogs had never been invented...
Kind regards, Cormac
I agree with you on that. Especially, a scientific article that takes on certain cultural or religious issues bears the risk of getting criticized or "Post Pub-reviewed" by hundreds of blogs filled with "science-enthusiasts" and critics of science itself.
So for an author to go and debate all over those blogs is not possible.
Hence, what we would need is an open platform where real scientists in the field do post-pub review and discussion. I think http://pubpeer.com/ was created for that purpose.
But we need to keep the option for anonymous comments too. Otherwise many of the juniors (even some seniors) wouldn't risk criticizing top guns.
"Anonymous peer-review" may be working in some cases. But not always.
I personally know a PI who just published his article in Cell Death and Differentiation (a Nature Publishing Group journal) without peer review.
The Editor, who holds the right to decide whether or not the article needs to be peer-reviewed, decided the work was good enough for publication without any further review.
Here it gets nasty when you know that the editor and the PI are thick friends (almost bedfellows), who call each other on the phone every week to share professional gossip and chitchat.
Just saying that not every peer-reviewed article has actually been anonymously and rigorously peer-reviewed.
- Anonymous
I'd rather not get my articles peer reviewed; I feel they are not open to any other ideas.
I think video games too are a waste of time and blogging is so much more healthy for you.