Together with some colleagues, I am reviewing a set of
papers that combine genetic and neuroscience methods. We had noticed wide
variation in methodological practices and thought it would be useful to
evaluate the state of the field. Our ultimate aim was to identify both problems
and instances of best practice, so that we could make some recommendations.
I had anticipated that there would be wide differences
between studies in statistical approaches and completeness of reporting, but I
had not realised just what a daunting task it would be to review a set of
papers. We had initially planned to include 50 papers, but we had to prune the set
down to 30 on realising just how much time we would need to spend reading and
re-reading each article merely to extract some key statistics for a summary.
Part of the problem is the complexity that arises when you
bring together two or more subject areas, each of which deals with large, complex
datasets. I blogged recently about this.
Another issue is incomplete reporting. Trying to find out whether the
researchers followed a specific procedure can mean wading through pages of manuscript
and supplementary material: if you don’t find it, you then worry that you may
have missed it, and so you re-read it all again. The search for key details is
not so much looking for a needle in a haystack as being presented with a
haystack which may or may not have a needle in it.
I realised that it would make sense to contact authors of
the papers we were including in the review, so I sent an email, copied to each
first and last author, attaching a summary template of the details that had
been extracted from their paper, and simply asking them to check if it was an
accurate account. I appreciated that everyone is busy and did not anticipate an
immediate response, but I suggested an end-of-month deadline, which gave people
3-4 weeks to reply. I then sent out a reminder a week before the deadline to
those who had not replied, offering more time if needed.
Overall, the outcome was as follows:
- 15 out of 30 authors responded, either to confirm our template was correct, or to make changes. The tone varied from friendly to suspicious, but all gave useful feedback.
- 5 authors acknowledged our request and promised to get back but didn’t.
- 1 author said an error had been found in the data, which did not affect conclusions, and they planned to correct it and send us updated data – but they didn’t.
- 1 author sent questions about what we were doing, to which I replied, but they did not confirm whether or not our summary of their study was correct.
- 8 did not reply to either of my emails.
I was rather disappointed that only half the authors
ultimately gave us a usable response. Admittedly, the response rate is better
than has been reported for people who request data from authors (see, e.g., Wicherts
et al., 2011) – but providing data involves much more work than checking a
summary. Our summary template was very short (effectively fewer than 20 details
to check), and in only a minority of cases had we asked authors to provide
specific information that we could not find in the paper, or confirmation of
means/SDs that had been extracted from a digitised figure.
We are continuing to work on our analysis, and will aim to
publish it regardless, but I remain curious about why so many
authors were unwilling to do a simple check. It could just be pressure of work:
we are all terribly busy and I can appreciate this kind of request might just
seem a nuisance. Or are some authors really not interested in what people make
of their paper, provided they get it published in a top journal?