Comments on BishopBlog: Blogging in the service of science

Sorry for arriving late on this post.
I think PNAS' "contributed papers" are a real problem. In our area, we all know that Mike Merzenich has a terrible record of letting papers in with a one-sided view on language disorders, some OK but others really poor.
If we had time for it (and were prepared for a fierce debate!) we could draw up an entire list of those papers and review them collectively...
-- Franck Ramus, 2012-04-30 09:09

My comment is somewhat off-topic, as it doesn't have to do with neuroscience but with blogging in the service of science.

A number of published papers advance the anti-vaccine agenda. Not all of them have appeared in scholarly or science journals. Almost the only criticism of the methods and techniques of these papers occurs in the science blogosphere, rarely in the journals that publish them.

Some of the bloggers have used Researchblogging.org, but others have not.
-- Liz Ditz, 2012-04-06 01:52

Catherine: Well, my preference would be for intervention studies without control groups not to get published at all in the first place. Increasingly, medical journals won't take them.
So this is really a plea for journals to adopt methodologically rigorous standards when evaluating intervention papers, as this is an area where a misleading study can lead to people spending a lot of time and money on something that is ineffective.
-- deevybee, 2012-03-26 06:09

Very naive... by your criteria about 1/4 of past articles should be retracted... (this paper is not, by far, the worst of this sort of thing)... More generally, the more you learn about a specific topic, the more you realize how deeply flawed many, many papers are...
-- Catherine Kerr, 2012-03-25 20:32

Here are a couple of other posts raising concerns about papers published in PNAS through one or other of the less transparent routes:

"Peer review and the 'ole boys network'": http://occamstypewriter.org/stevecaplan/2011/10/23/peer-review-and-the-ole-boys-network/

"Exercise may be good for you, but it doesn't boost your memory": http://j0ns1m0ns.blogspot.com/2011/02/exercise-may-be-good-for-you-but-it.html

Interestingly, in the case of the second one, PNAS subsequently published a letter that strongly criticised flaws in the original paper:

"Update on exercise and memory story": http://j0ns1m0ns.blogspot.com/2011/04/update-on-exercise-and-memory-story.html

Perhaps this is a route somebody might like to go down in relation to the Temple paper, to ensure that its flaws are communicated to the journal's
audience?
-- Jon Simons, 2012-03-13 09:58

Following up the disregard/flag ideas, I've left two comments about the half-life of bad/flawed science on Neurocritic's blog piece that Dorothy mentions; see http://tinyurl.com/76b32ky
-- Tristram Wyatt, www.zoo.ox.ac.uk/group/pheromones, 2012-03-12 20:58

[This comment has been removed by the author.]
-- tristram, 2012-03-12 20:56

@NeuroPrefix,
I agree, to an extent, and feel no need to defend the journal... just to make clear what they do and don't do. If someone says they should have authors release data, it's worth pointing out that they already do. (Though one annoyance is that we're required to release our data for free, but PNAS will only release the article open access if we pay them extra.)

I also think that NAS contributed articles make us notice the problems more, but this is a glamour-journal issue in general. The glamour journals are attracted to unexpected results and to results that contradict established assumptions; worrying about whether methods are under-reported or weak is a secondary concern. Of course, if a result contradicts established ideas, it's more likely to be wrong. There's a slew of Science and Nature papers that are thought-provoking and wrong. Sometimes an author had a lucky false positive; other times, the methodological flaws become rapidly obvious once the expert community starts to dig into a result.

This also goes beyond glamour journals. One could probably count on one hand the number of Alzheimer's or MCI studies with fMRI before 2005 that accounted for task performance or for partial voluming in atrophied tissue.
-- Dan, 2012-03-12 19:05

Related to Dan's point, this is the problem with PNAS. Most direct submissions are fantastic papers, as Dan's likely is. But I have read some fatally flawed articles that were 'contributed' by members, and these can bring down the reputation of PNAS within the community at large, and lead to people feeling a need to defend the journal.
I think it would be in the journal's best interests to restrict publication solely to the more traditional review model.

Some choice PNAS contributions I recall are:

One paper using pain stimulation to the hand as a way of studying interoception.

A paper comparing task deactivation between patient and control groups, where the patients performed the task exceedingly poorly while the controls were excellent, without addressing this issue.
-- NeuroPrefix, 2012-03-12 16:35

As a co-author on a very cool paper that should be coming out in PNAS soon, I want to note that some of your suggestions are actually PNAS policy. For example, while NAS members can still edit their own submissions, this form of submission is no longer allowed if they have any financial interest in the work.

In addition, authors of published papers must submit their data to an open database. The corresponding author of the paper I'm on just put some of our fMRI data in a publicly available database per journal requirements. It's a bit fuzzy what level of processing one needs to submit (i.e. raw off the scanner with every script used to create the final figures, vs. the final contrast maps).
Given the amount of data in our paper, we did something in between that should also be an interesting contribution.

You can read about both of these rules at http://www.pnas.org/site/misc/iforc.shtml

For what it's worth, the paper I'm on was a direct submission with an editor we didn't know until acceptance, and the oversight and comments regarding methodology were rigorous.
-- Dan, 2012-03-11 19:27

Just seen that Neurocritic has already made the very same point:

"By 'discarding' I meant disregarding the results from flawed articles, not retracting them from the literature entirely."

http://t.co/q6RqwBw0
-- Jon Simons, 2012-03-11 11:42

Dorothy, I think we agree on 99% of this. Part of our apparent disagreement is down to the word Neurocritic used, "discard". I took this to mean "retract", but in your comment on Daniel Bor's excellent post you used the word "disregard", and I would agree with that. I strongly feel that retraction is too strong a requirement for "flawed" papers (fraud should be retracted, of course). Part of the reason is that it is so subjective what might constitute serious, and therefore perhaps retractable, flaws.

But I completely agree with you that the field should be made aware of flawed papers, and blogging (such as your fantastic last post) and comments on journal websites are a great way of making that happen. We can all then assess the flaws, and learn how to avoid them in the future and do better science as a result (my point in my earlier tweet).
We can also then decide how large a pinch of salt we're going to take when considering the findings, and perhaps choose to disregard them when compiling reviews of the area or otherwise seeking to generalise across studies.
-- Jon Simons, 2012-03-11 11:24

I can see the argument, but I think we already put too much faith in closed peer review instead of our own critical faculties as readers. The paper you've drawn attention to is just one of many, many flawed or problematic studies published each year. Most of them go through the full peer-review process unscathed, while we're all aware of apparently unproblematic studies that are delayed and published in obscure journals because of awkward and unreasonable reviewers.

I'd like to think I've never taken a study as read just because it's been reviewed, and I always like to form my own opinion after reading it. But I can only do this if the paper is out there to read, criticise and understand.

In the case of the paper you mention, the lack of a control group is really problematic, but the reporting of uncorrected statistics is a less clear-cut issue in my view (each of the alternatives has some problems, some of which are outlined by Jon Simons and Neuropocalypse in comments on Daniel Bor's blog). In addition, I think uncritical over-reliance on statistical thresholds as the basis of inference is potentially as problematic as uncritical acceptance of uncorrected stats. At present it's rather easy for neuroimagers to claim that "activation was restricted to region X" and to draw conclusions based erroneously on the "absence" of activation elsewhere (just as psychologists are prone to overinterpret the absence of effects in, e.g., ANOVA).
A bad experiment produces activation all over the place, but it is meaningless; conservative thresholding of the data prevents the reader from evaluating such claims. Again, I am arguing in favour of transparency and clarity about the data: thresholds and reviewers just get between me and the data.

It seems to me that deliberately fraudulent, careless or misleading papers are pretty rare. It is important that methods and results are reported fully and accurately, but I am afraid that in analysis and interpretation people (usually other people!) are sometimes just wrong; to err is human. I think the nice thing about science is the systematic way it deals with human error (eventually).

I think robust, public criticism of the kind you made in your blog post is a better solution than removing flawed work from the literature.
-- Tom Hartley, http://tomhartley.posterous.com, 2012-03-10 20:05

I'm terribly new to blogging, and still very much finding my feet, but I have been delighted and inspired by the opportunity for debate it brings. I agree that in some ways this medium is a lot better for certain discussions than the standard academic routes, such as peer-reviewed publication or face-to-face chats at conferences.

I do discuss point 3 quite a bit in my own recent blog post (and that forms a portion of the comments discussion too), but briefly, I strongly agree with you, Dorothy, that there are some papers that have negative value, because they mislead. They might only waste the time of a bunch of scientists in failed replications, but they could also lead to treatments that do more harm than good.

I also totally agree (and wrote about this on Twitter too!)
that there is a big distinction in neuroimaging between the data and what's reported in papers, which is usually very many steps away from the raw brain-scanning data. It's possible that there is some useful design or method in a bad paper, but the whole point we've been blogging about is that usually this is not the case: the design or analysis is deeply flawed, and such poor publications can lead to the adoption of bad experimental habits.

There are good arguments (though with ethical issues) for making much imaging data public in collective repositories, and there have been some attempts to do this. For instance, a pool of anatomical MRI data, anonymised but linked to details such as age and sex, is a tremendously rich resource that can turn into many new studies. Things get more complicated with functional imaging data, but there are still many uses for it in collective repositories. But here we're not talking about papers at all, just a collaborative raw resource.
-- Daniel Bor, http://www.danielbor.com, 2012-03-10 18:59

It'd be interesting if we established some utopian alternative system for disseminating the results of experiments that sidesteps academic journal publication, but right now, if you retract and delete academic papers of poor quality, the methods and results become inaccessible. I think there is often some useful information in poor-quality studies, and making it inaccessible is not the answer.
-- ben goldacre, http://www.badscience.net, 2012-03-10 16:19