Comments on BishopBlog: Will traditional science journals disappear?

Anonymous (2015-06-20 15:43):
Many thanks for the interesting link to the RS meeting. Section IIb contains a nice discussion of the physics arXiv, but I would like to have seen more on this. Many of us physicists find the dual system works very well: arXiv for distribution, but serious, detailed reviews from the relevant journal.
Looking forward to meeting you at the Boyle Summer School!

JR King (2015-06-18 13:34):
Thanks for the reply. Just to follow up on this discussion.

I don't think that an explicit rule against impact-factor-based evaluation, following the strategy of the Wellcome Trust, will solve the issue. On the contrary, I would argue that impact factors are heavily influential not so much because the scientific community is formally asked to use them (by funders or their hierarchy), but because impact factors provide a shortcut. Reading someone's papers in detail and evaluating the researcher's competence is much more time-consuming than checking the journal where his/her work has already been evaluated. So my bet would be that as long as these metrics are not supplanted by better ones, they will remain influential.

Regarding anonymity, we apparently have different intuitions and experiences, but I agree that each solution has its own advantages.

I believe, however, that most anonymity-induced issues can be resolved.
For instance, reviewing architectures à la StackExchange, where reputation points come with useful questions and comments, have proved remarkably efficient. By contrast, signed reviews will always come with the conflicts of interest that arise from our professional, social and economic relationships.

deevybee (2015-06-10 07:20):
Thanks for commenting. I am not swayed by your arguments, but it is good to think through why:

Re 1: More enlightened funders do not use the Journal Impact Factor. It is eschewed, and rightly so, by the UK's Higher Education Funding Council and the Wellcome Trust, who are signatories of DORA (the San Francisco Declaration on Research Assessment). I agree it is a current problem in some circles, but I'm optimistic the tide is turning against it: see here: http://cdbu.org.uk/universities-journal-impact-factors/

Re 2: If there is pre-registration, then priority for an idea is date-stamped in the protocol. Any theft of ideas would be detectable. (I have to say, in my area, this is seldom an issue anyhow, because nobody does exactly the same thing.)

Re 3: I just disagree! There are upsides and downsides to anonymity, but the downsides are much worse in my view. You might want to check out the reviews of my most recent paper in PeerJ (which are public); I do not see any fawning. There is one ultra-nice one, but it is by a very senior figure in the field who does not need to suck up to me; the most critical one is by a junior person. My attitude to the junior person is respect, not a desire to get my own back in the future!
And the fact that others can see her review can only benefit her reputation as someone who can critique a study well.

JR King (2015-06-09 13:48):
These are interesting suggestions; I have a few remarks, though.

1. The funding system largely relies on publication metrics, as they are a quick way to evaluate researchers (of course, this argument can be a legitimate debate in itself, but the point is that decision makers appear to fully subscribe to this idea). A simple question thus follows: how would the journals derived from your proposal maximize their impact factor and/or their attractiveness to good studies?

2. Starting the peer review even before the beginning of the project regularly comes up as an alternative (e.g. http://blogs.discovermagazine.com/neuroskeptic/2013/07/13/4129/#.VXba5V2-Rx0).

But a major issue remains undiscussed (as far as I know): ideas are not so cheap, and scientific competition can be fierce. We have all encountered cases where two different labs release similar work simultaneously, and each may be tempted to delay the reviewing process of its opponent if given the opportunity to do so. Making a statement of what you will do, why, and how you'll do it increases the possibility that some of these ideas will be simultaneously tested elsewhere, perhaps by a bigger lab and by faster people who capitalize on your thought process and don't even want to go for a slow and heavy pre-submission scheme like the one you suggest.

3. In my opinion, making de-anonymization an incentive (e.g. for reviewing) can be dangerous.

We already work in small-world research areas, and criticising someone's work can be detrimental in many regards: young scientists' careers and their very limited number of publications are of course at play, but you also bias the whole selection of reviewers if your reviews are known to be polite or tough.

Of course, there now exist many ways to raise scientific issues anonymously (e.g. PubPeer), but a de-anonymization system imposes a small but, I believe, systematic bias towards fawning and authoritative arguments. Worse, you can imagine that future metrics will take reviewing credits into account, and will thus strengthen reviewers' inclination to soften their remarks.

Anonymous (2015-05-29 10:52):
I like this model in a lot of ways. However, I would feel anxious putting forward a protocol which I had carefully thought through and spent a lot of time on BEFORE I had funding. As we know, many, many research grants are not funded, even when they are highly rated by reviewers. Once your protocol was in the public domain, what is to stop someone else adapting it and applying for funding? Or do we want to encourage that as a way of advancing science more rapidly?

I note as well that more and more funders are allowing applicants to respond to reviewers' comments. This does allow you to make minor tweaks to your method after peer review.

Rodolphe (2015-05-18 16:13):
Very interesting suggestions. I'm not sure it's feasible in the current state of things, though. Many studies are approved by an IRB and funded under a certain protocol.
Changing that protocol afterwards based on reviewers' comments might be awfully difficult. And submitting the idea for a protocol before it is even approved or funded sounds tricky.
God knows I'm all in favor of collaboration, and I find current scientific research far too individualistic, so I support you on that. But the funding problem remains. Grants are getting smaller and rarer. Obtaining a grant for 50 people is not easy, and being a small team does not mean that you will produce underpowered results.

Dominik Lukeš (2015-05-17 17:36):
This sounds like a solid system to me. Particularly the publication of the method after the grant award. I proposed this 'Kickstarter' type of funding some years ago on http://researchity.net.

But I'd like to point out a few alternatives/stumbling blocks/improvements:

1. Why charge at all? $99 may seem like a pittance to a professor at a UK university (particularly compared to the extortionate open-access fees), but in many countries challenged by exchange-rate inequities, this could represent a significant proportion of income, particularly if it precedes the award of funding.

2. Avoiding trolls or frivolous submissions could just as well be achieved by a Stack Exchange-like system of reputation, where anonymity could even be preserved.

3. The system could just as easily be paid for by the government, by taking a fraction of the money it funnels into libraries to purchase journals. We should stop pretending that academic publishers are anything but rent-seekers who could not survive without massive indirect subsidy from the tax base.

4. Your approach is more suited to the experimental branches of scholarly inquiry.
It could be modified for fields like history, literary studies, linguistics, etc., or even for literature surveys, as a sort of funding filter.

5. You still need to solve the issues of 'discoverability' and 'curation', which could be addressed by allowing for some sort of 'editorial collectives': groups of people who regularly share the top articles submitted to the archive in a 'journal-like' manner.

6. One way to solve participation is to include it in the evaluation of institutions for funding purposes.

7. The other issue this could solve is the obsession with format and length limitations.

8. You could also start at a much earlier stage and involve a larger group of stakeholders to help formulate relevant ideas. I've suggested something like that at http://researchity.net/2011/03/02/community-research-and-knowledge-exchange-network-for-neuroscience/.