Wednesday, August 26, 2015

Hypothesis: The future of peer review?

If I could recreate the way research results are quality checked and revealed to the world, I would probably change almost all of what is currently done. I think the isolated scientific paper is a product of the 20th century, being imposed on the 21st purely because of inertia. A better solution would be to give a "living paper" to each general research project an individual researcher has. This living paper could then be updated as results change/improve. In such a system I would probably have ~5 living papers so far in my career, instead of ~20 old-style papers. Or, even better, there could be a large wiki edited, annotated, moderated and discussed by the science community as knowledge is gained.

Even if you wish to keep "the paper" as how science is presented, I think that the journal system, while invaluable in the 20th century, also exists in the 21st century only due to inertia. Pre-print servers like the arXiv are already taking care of the distribution of papers, and the peer review, which is responsible for the quality-check side of things, can (and might?) be organised collectively by the community on top of that. But why should we stick with peer review anyway? Could there be a better way?

Firstly, let me stress, peer review is definitely an incredibly effective way to progress knowledge accurately and rapidly. The best ideas are the ones that withstand scrutiny. The better an idea is, the more scrutiny it can withstand. Therefore, holding every idea up to as much scrutiny as possible is the best way to proceed. However, by peer review I simply mean criticism and discussion by the rest of the scientific community. I think the way peer review is currently done, at least what people normally mean by "peer review", is very nearly worthless (and when you factor in the time required to review and respond to review, as well as the money spent facilitating it, I'd be tempted to claim that it has a negative impact on research overall). The real peer review is what happens in informal discussions: via emails, at conferences, over coffee, in the corridor, on Facebook, in other papers, etc. The main benefit the current method of peer review has is simply that the threat of peer review forces people to work harder to write good papers. If you removed that threat, without replacing it with something else, then over time people would get lazy and paper quality would degrade, probably quite a lot.

But that would only happen if the 20th century form of peer review was removed without replacing it with something from the 21st century. I wrote above that the real form of peer review happens through conversations at conferences, in emails, etc. The rapid access to papers that we get now makes this possible. In the early-mid 20th century, because the (expensive) telephone was the only way to rapidly communicate with anyone outside your own institute, word of mouth would spread slowly. Therefore some a priori tick of approval was needed to confirm the quality of a paper before it was distributed; hence peer review. But now communication can and does happen much more rapidly. Today, if a paper in your field is good, people talk about it. It gets discussed in emails amongst collaborators, which then disperses into departmental journal clubs, and the information about the quality of the paper is disseminated like that. It's worth emphasising that, at least in high energy physics and cosmology, this often happens long before the paper is technically "published" via the slow, conventional peer review.

However, this information probably still doesn't disseminate as widely or as quickly as might be ideal, given the tools of the web today. What would be ideal is to find a way for the discussions that do happen to be immediately visible somewhere. For example, what if, instead of having an anonymous reviewer write a review that only the paper's authors and journal editor ever see, there was instead a facility for public review (either anonymous or not), visible at the same site where the paper exists, where the authors' replies are also visible, and where other interested people can add their views? The threat of peer review would still be there. If a paper was not written with care, people could say so in a review. This review would remain unless or until the paper was revised. Moreover, negative reviews that would hold up a paper could also be publicly seen. Then, if a reviewer makes unfair criticisms, or misunderstands a paper, the authors could make this clear and the readers can judge who is correct. Or, even better, the readers can add to the discussion and perhaps enlighten both the authors and the reviewer (with words that all other readers can see)!

One way to achieve this would be to add comments/annotations to the arXiv. For various reasons, the people at arXiv are reluctant to do this, and I can empathise with that. ArXiv is probably one of the best things to have happened to the high energy and astrophysics communities (who use it the most) because it gives access to any paper as soon as it is submitted, without charging the reader or delaying the access in any way. I am happy that they want to focus on being able to continue to provide this service well.

But this doesn't mean that the service isn't desired. And, now, in fact, something very near to this does exist. It is called Hypothesis and it is a web annotation tool. Essentially, it is a browser plugin that allows you to read and write annotations anywhere on the web. So, if you have the plugin installed and write an annotation on a paper at the arXiv, then I can read it (or vice versa - note that you do need the plugin installed to see annotations). It seems to work very well.
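
For the technically inclined, the annotations are not locked inside the plugin. Below is a minimal sketch of reading them programmatically; it assumes the public Hypothesis search API, its uri query parameter and the response field names behave as I expect, and the arXiv identifier is just an example, so treat it as an illustration rather than a definitive recipe:

```python
# A sketch, not a definitive recipe: list the public Hypothesis annotations
# attached to one arXiv abstract page. The endpoint, query parameter and
# response fields ("rows", "user", "created", "text") are assumptions based
# on the public Hypothesis search API; the arXiv URL is only an example.
import requests

PAPER_URL = "http://arxiv.org/abs/1502.01589"  # example paper, any URL works

resp = requests.get("https://api.hypothes.is/api/search",
                    params={"uri": PAPER_URL, "limit": 50})
resp.raise_for_status()

for annotation in resp.json().get("rows", []):
    # Each row is one annotation: who wrote it, when, and the comment text.
    print(annotation.get("user"), annotation.get("created"))
    print(annotation.get("text"))
    print("-" * 40)
```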

Unfortunately, I don't think in its current form Hypothesis could replace peer review, even if the inertia problem could be overcome. Such a system would require a critical mass of people using it before it became effective. At present, an annotation is either visible to just the author, or to anyone. If annotations could be restricted to sub-groups then people would be more inclined to write annotations. Then, the particular annotations that the group (e.g. a research group at a university) finds most useful could be made more publicly available, if desired. Also, the ability to be notified (e.g. via email) whenever annotations are written on specific webpages or websites would be needed. At present I can only see the option to be notified when someone replies to my own annotation. This means that if an annotation is written on a paper at the arXiv, then nobody else knows until someone specifically chooses to look at that paper, meaning most annotations will lie unread for a long time; a time unbounded from above.
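
To make the notification idea concrete, here is a rough sketch of how such a digest could be built on top of the same (assumed) search API, polling a watchlist of papers and reporting anything new since the last check; the endpoint, field names and example URLs are assumptions, and a real service would persist its state and send emails rather than print:

```python
# Sketch of the missing notification feature: poll the (assumed) Hypothesis
# search API for a watchlist of papers and report annotations newer than the
# last check. URLs, endpoint and field names here are illustrative assumptions.
from datetime import datetime, timedelta

import requests

WATCHLIST = [
    "http://arxiv.org/abs/1502.01589",  # hypothetical papers being followed
    "http://arxiv.org/abs/1303.5076",
]

def annotations_since(paper_url, since):
    """Return annotations on paper_url created after the datetime `since`."""
    resp = requests.get("https://api.hypothes.is/api/search",
                        params={"uri": paper_url, "limit": 200})
    resp.raise_for_status()
    fresh = []
    for row in resp.json().get("rows", []):
        created = datetime.strptime(row["created"][:19], "%Y-%m-%dT%H:%M:%S")
        if created > since:
            fresh.append(row)
    return fresh

last_check = datetime.utcnow() - timedelta(days=1)  # e.g. a daily digest
for url in WATCHLIST:
    for note in annotations_since(url, last_check):
        print(url, "--", note.get("user"), ":", note.get("text"))
```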

Once such features are in place I think it would provide a good working model for a 21st century peer review system. Unfortunately the inertia behind the 20th century system is so large that I don't hold out a huge amount of hope that change will occur (people will do what the funding requires and so long as some major funding sources judge based on "published papers" people will submit to journals). Such a change might therefore require top-down policy change by funding agencies themselves.

In any case, some (20th century) scientists don't even think we would benefit from doing away with 20th century peer review! So opposition is from more than just inertia. Still, you should install the plugin and have a play with its features.

What are your thoughts? Is the current peer review system sub-optimal, and would an annotation system be better? If not, why not?

Twitter: @just_shaun

26 comments:

  1. You haven't really explained why you think the current way in which peer review is done is worthless, let alone negative. I know what you mean, of course, and I think I might agree ... you just haven't really spelled it out in this post ;)

    Replies
    1. Yeah that's mostly true. I did have a few more sentences on it, but I cut them when editing. I went for the "it's just not necessary when peer review happens elsewhere anyway" line. I did say it was a big time consumer, which I think is one of the biggest problems with it. If all that time is to be spent, the correspondence may as well be public, to maximise the potential gain to the community from the time each side spends on it.

      I also alluded to low quality reviews (both positive and negative) which would then be visible to all readers for them to make their own minds up.

      It is of course a big topic that any practising scientist will have lots to say about. Though, I think BICEP2 is probably one of the best examples of how conventional peer review gets beaten by 21st century peer review, given that it was a blog and an online video of a talk that first broke the "it's probably dust" news, and the original paper ended up published in PRL. I've often wanted to write about peer review with that as a case study. Actually, I think BICEP2 would be a great case study for lots of aspects of science and science communication.

  2. I've expressed my thoughts on this many times In The Dark.

    I really don't like the living-paper concept. The idea works well for, say, a historical review, where new material can be added as the field progresses, or for something like the particle-data book. But writing a "paper" then updating it when new knowledge is available? That's fine for an encyclopedia, but not for a paper, for several reasons. First, papers provide a snapshot of what people, or at least the author, thought at the time. This is valuable information. (Don't say that old versions are available. There isn't enough time to read even one version of the papers one wants to read.) Second, science is self-correcting because people write papers which find flaws in other papers. If the flawed papers can be updated, thus presenting a moving target, this will no longer be possible. It's not just the author updating it after the criticism has been published; maybe he publishes something wrong and, before the criticism is published, manages to update it to remove the object of criticism. Third, when a paper is cited, one would have to cite a specific version, while the reader might be familiar with another version. Too much to read, too short the life.

    The idea behind peer review (which, as you say, might not always work well) is that the peer can make suggestions and then recommend that the paper be accepted (or not), which the editors usually follow. If there is some sort of online rating system, what determines whether a paper is "good enough" and when?

    I think traditional peer review should be kept. Of course, it can and does make use of electronic communication. If a journal is doing peer review badly, submit your paper to another journal.

  3. Another thing about conventional peer review is that it provides some measure of objectivity. Yes, with time, on average good papers might become well known and bad papers vanish into obscurity, but there are many other factors involved, concerning the status of the authors and reviewers, the relationships between them (who is hiring whom), and so on. Getting rid of traditional peer review might lead to an "us and them" mentality, whereas with traditional peer review publication is at least a minimal stamp of quality which induces some people to read a paper they otherwise might not.

    I agree that peer review is not always good, and I have had some bad experiences myself (a long time ago), but at least in my experience usually suggestions have been good. (Of course, if you write perfect papers right away, peer review is a waste of your time.)

    Maybe as a compromise one should push a suggestion I read somewhere (can't recall where): when the paper is published, publish the name of the referee. I think that this would put some pressure on those referees who need it to be a bit more careful. (In the old days, MNRAS papers had a tag-line saying that they had been "communicated" by someone (usually of higher standing than the authors). I don't know what this actually entailed, though.)

  4. I think that's a good walkthrough of the situation...thanks for doing it.

    Personally I think the problems facing science are much deeper and more intractable than the failure of peer review. It failed because the culture, and deep-seated problems, have changed things. Peer review was never a complete, out-of-the-box, stand-alone solution. It's never been used anywhere else...and that's because it wouldn't work anywhere else. It worked because of all the other unique attributes of science, many of which no longer exist. Rapid progress was the engine. Sciences were merging, while others were dividing: new fields, failed fields, breakthrough fields. This wouldn't have been possible in non-science without everything blowing apart and the fragments then turning hostile on each other. Because that's the story of non-science. But there was the rapid progress, and the fact that sciences were expanding and diversifying, like dividing cells, more than anything seen before. It bred a culture of intrepidness...scientists did not expect or want each other to do anything other than be faithful only to truth. And that was unique in human history, and for the first time ever the way "what is right and who shall say" was settled was not by the "relative power" equation. Science was a unique and miraculous thing. The problem now is that progress has stalled for a few decades. Real scientific progress. And that has taken the energy away. And without the driving energy all the other attributes have eroded away. And I'm afraid that means things are being re-regulated to the old ways of doing things. Power. And with that all the other things. Fields become groups, and if you're in a group you are loyal to the group. And you are hostile to the critics. And groups don't peer review other groups. They don't even read what they are publishing. Nothing is the same when this happens. Citations: there's no progress, so citations increasingly become reciprocal, strategic. Comrades in a big blowjob circle reciprocally citing each other, and icing out those who aren't toeing the line. Evolutionary biology literally recreated itself and threw out conservative scientists simply by cross-citing nonsense, and just ignoring totally the groups that wouldn't walk away from evidence.
    This is a real crisis. Science is unique. It's more like an organism than any other human endeavour. It's an analogy, obviously. But science can die, in the sense that if the candle goes out completely we may not be able to get it back. I think we're in a process like that. I hope that candle hasn't gone out. But people need to start taking this much more seriously. If science dies, the future dies.

  5. I agree with Phillip's views on the 'living paper' idea - I think it would rapidly become unwieldy. Most of the papers I've published have gone through, probably, 10 iterations as data has come in, and that was only once we'd got enough data to attempt putting the paper together. If it were possible to just put up a paper at any stage of an investigation I think people would begin very early on to stake their claim in a certain discovery. That would mean reams of tedious updates - it would be like reading someone's lab-book.

    I actually find the current journal system very useful in terms of directing what to read. I suspect the significance of this problem correlates with the size of your field. If you work in a field with only 1 or 2 relevant publications a week then it's manageable to read them all, but if you work in one with 1 or 2 hundred a week then it isn't. I accept that not all papers published in Science, Nature, Cell etc. are outstanding, and that not all outstanding papers go to those journals, but it does give me an immediate measure of their significance before deciding to read. I guess it's most important at the lower end of the spectrum - I can see that a paper has been published in some crappy fake-sounding journal so can give it less (or none) of my time. If everything got stuck up onto a centralised server I'd have no way of knowing this.

    I also disagree with Chris that science has descended into small groups of cross-citing friends. Perhaps it's naivety on my part, but my experience is not that big names in a given field revert to the 'you scratch my back...' mentality but instead become quite fierce competitors. I would say that this is actually a bigger problem - since one immovable reviewer (who usually has the luxury of anonymity) can stall a paper for months or years while their own lab quietly reaps the benefits of having seen the unpublished data. However, let's say that cross-citing groups are a problem - I think one of the major checks for that would be the journal system. No matter how many times you cite your friends and they cite you, that's not going to get you into Nature. Journal editors provide a measure of objectivity with regards to a paper's value that is independent of the peers who review it. Imagine if all papers existed on a single online archive where the only measure of value is the number of citations and the ratings it gets from other researchers. That would really leave the door open for reciprocal back-scratching, since now citations would be more valuable and PIs could cover their friends' papers in good comments (and get their students and postdocs to do likewise). There would be less motivation to produce good papers since you could be assured of a warm reception just by dint of who you are and no pesky editors or reviewers to get in the way.

    Replies
    1. All of this isn't to say that I don't think there are problems with the current system. The competition of the journal system is a good motivator, but the fact that so many researchers are judged solely on where they publish rather than what they publish is damaging. It puts ridiculous pressure on PIs, who are constantly having to justify their funding, which trickles down to the students and postdocs in their labs. Firstly, this encourages labs to aim only for sensationalist work that will score highly on the novelty front. Secondly, it leads to the steady rise in data fraud that we've seen in recent years. And thirdly, it makes science less fun to do. This might seem unimportant, but so many good scientists leave research for something less stressful and better paid that it is a waste of our intellectual potential. Also, science is about fostering curiosity and improving lives, so to attempt that in an atmosphere of constrained curiosity and impoverished personal lives seems perverse.

      I would prefer a system where papers are still quality-checked into different journals, but where post-publication commenting is easier and more widely advertised. This could be through a similar system to Hypothesis or organised by each journal individually on their open-access server. We currently don't have an obvious forum for the discussion of research that is actively used by all researchers. I understand in physics the arXiv pre-print server fulfils some of that role, but I'd imagine most people in life sciences haven't even heard of it. I've discussed on this blog before why I don't think pre-print publication is particularly suited to fields like biology, but I think a post-publication equivalent could work well.

      I also think that peer review as a concept is a good one, and as Shaun says it acts as a nice threat against sloppy publication. However, I think the risk of either poor or over-zealous reviewing is very high. Most papers are only reviewed by 3 or 4 peers, so the opinion of any one of them carries a lot of weight. I would prefer a larger number of reviewers, perhaps discussing the paper in an active online forum along with the authors, so that misinterpretations can be better explained. The whole process would probably be faster than the current system of back-and-forth between author, editor, and reviewers. I would still just like a single, completed paper to get to full publication, though, but perhaps the reviewer-author discussion could be made available online as part of the SI.

    2. Hi - just to say I was broad-brushing, because it's a very big subject...and hard to talk about the problem from the top level. I'd definitely advise against literal readings, but in fact I didn't say science had broken up into groups of cross-citation. I said it was in a process of re-regulating back to that. There's a lot of good science being done, and a lot of people. There's hopefully all to fight for. About citation: it wouldn't be hard to survey it. It doesn't matter if people cite each other, so long as it's not the same papers in both directions. What matters is: what was the cite for? Was it because something was mentioned that was core to the paper, or was it just background? What is the connection? The cites that matter are those where, had the earlier work not been done, the citing paper could not have been written.

    3. I don't think online, public peer review as a replacement for private, anonymous peer review would result in back-scratching etc. Reviewing would still be something taken seriously. If groups went around just writing platitudes about each other's research, those reviews wouldn't be taken seriously. Over time, people who write good reviews would get noticed, and it is those people whose opinions would be trusted.

      There may be a problem with measuring "impact" for future funding. That I concede. Perhaps that's all you were referring to though. I was interpreting your comment as also referring to what papers people will read and think about, etc.

      Measuring impact is something I haven't yet been able to work out how to do in a future, non academic journal world. However I think the way it is done now is also not really particularly good (i.e. what journals and how many citations). The best in either world seems to me to be, in the case of grant applications, a review of the person/proposed project, solicited by the grant panel. This is of course peer review again, but of the proposal, not the finished paper.

    4. James, you say that in biological sciences there would be problems with pre-print publication, and also that most people in these fields have not heard of the arXiv. Are these two things related? I mean, do people in the life sciences ignore the arXiv because they don't like publishing pre-prints?

      I ask because there's no need for them to be related at all. A lot of my colleagues and Shaun's, perhaps a majority, only ever post papers to the arXiv AFTER journal publication, or at least acceptance. The benefit of the arXiv here is in providing truly open access to science results, which is completely independent of any discussion about peer review. It would be a tremendous shame if life science was missing out on the benefits of open access (and prolonging the exploitation of academic publishers) because of a conflation of the two issues!

    5. I don't think so - I think it's more that those who have heard of arXiv see it as a physicists' thing so don't pay it much attention. I probably wouldn't have heard about it if I didn't know Shaun, and I have to admit I haven't posted any of my papers there for exactly the reason that it's almost all physics.

      I don't think life sciences is missing out, though. The novel thing about arXiv seems to be the pre-print option rather than the open access. If you only post papers after they're published then I don't see how that's any different to PubMed since most funding bodies are happy to (and, indeed, demand to) pay for immediate open-access archiving if you publish in a closed-access journal.

    6. I guess life sciences is missing out because those funds from the funding agency could go to better things. How much does it tend to cost for immediate open-access?

      It's your paper, written by you; why should you have to pay for it to be made public? You could just upload a pdf to your own website.

    7. Well most journals are immediately open-access anyway so the payment only applies to a minority. It typically costs about 1000 USD to go open-access. I suppose the reason why funding bodies are willing to pay that is that they recognise the increased impact publication in certain journals can have, otherwise they would say don't bother just stick it on arXiv or equivalent.

      It's true that it's your work, but it's not just your paper - the funding body and the institution also have intellectual property rights over the data in it. You'd run into all sorts of legal issues if you decided just to make data public by a non-standard channel, such as your own website.

    8. "... the funding body and the institution also have intellectual property rights over the data in it"

      Well, exactly. I was too flippant by saying it's "your" work. You are right that of course it is your work, done while employed by an institution, funded by a funding agency. Those three entities have some ownership over the work. So why should those three entities have to pay some other entity $1000 to show this work to other people when it can be done for free, allowing the $1000 to be put towards future research? I can't fathom how that isn't ridiculous.

      Regarding impact factors, it would seem far more likely that it is good papers improving the impact of a journal than good journals improving the impact of a paper. I would be interested to know of any studies providing evidence as to which direction is more dominant. Why not spend some fraction of that $1000 on marketing the paper directly to increase its impact?

    9. This isn't the 20th century where we need to pay for the costs of the paper in journals, and the printing costs and distribution costs to ship that paper all around the world, just to give people the possibility to read our papers. ("Our" being applied to scientists, institutions and funding bodies who all add genuine value towards the production of research output)

    10. Ok, I don't know how I seem to have ended up defending what some journals charge for open-access - for the record I also think it's extortionate to charge that much for essentially putting it on a server. However, if I'm being devil's advocate I would say that the journals provide a service to the authors of a paper by providing greater impact than the paper would have in other journals or just on an open-access server. The cost of this service (the editors, other staff, upkeep of online tools, and many other overheads) is met largely through the subscription fees paid by individuals and institutions. If an author wants their paper to be open-access then they essentially undermine the means by which the journal is funded, since no-one's going to pay for a subscription if it's all free online anyway. Hence the open-access fee. I can't speak for how much the $1000 average truly represents the cost of the whole process, but I imagine that's the logic behind it.

      My original point was that if you were only to use arXiv as a post-publication server then I don't see how it's any different to PubMed. So, I don't think the issue is that life scientists are against online archiving, more that the pre-print approach is harder to sell in a heavily experimental field.

  6. "I also think that peer review as a concept is a good one, and as Shaun says it acts as a nice threat against sloppy publication."

    I'm sure that's correct, but what would change would be the scope of what is covered under the umbrella 'sloppy'. A sloppily put together paper in terms of format, figures, cites, clarity: certainly this still appears to be going strong.
    But in terms of usefulness, impact studies just look at cites. But if you step back and look at networks of cites, what you see is that there's a hell of a lot of marginally different studies all at the same sort of level, all of which cite other work at roughly the same level. I think in times gone by, this would have garnered a lot of interest, concern...then research, out of which a lot of people would have flocked to that area as a matter of science in and of itself. Because if it turns out all we're doing is flooding journals with work that is all basically just being caught at the same bottleneck, then it isn't impactful to do another paper like that, unless there is an intensifying impulse that is being received by a substantial other component in science that is working at that bottleneck. It appears that this may not be true, for a substantial period. But we don't know, because the only people looking at the direction are some isolated concerned bloggers, a few scientists writing lonely reports. Science needs large-scale interest in the core arterial line of progress. But there's a misconception that has taken root, that 'fundamental science' means 'fundamental reality' or 'physics'...something empirical. But anything that is blocking scientific progress is FUNDAMENTAL.

  7. This comment has been removed by the author.

  8. Thanks everyone for the comments on this.

    Regarding people's hesitancy about living documents replacing papers, I won't say *much* as that was meant to just be a throwaway comment in the post itself. I do stand by that throwaway comment, but I'll try to justify the comment and provide an argument for it in its own post... one day. One response I will make is that it seems to me that much (though not all) of the criticism of living documents being presented comes from not embracing the concept *completely* and still thinking from the perspective of individual papers. For example, if your living document cites someone else's living document, I wouldn't say you need to cite a specific version, and if their living document is changed such that you no longer wish to cite it, you can remove your citation. Also, if people pull crappy stunts like changing their document to make it look like they came up with an idea, you could write a comment on it pointing this out, with evidence, and the people who pull such stunts will get bad reputations and people won't trust their other documents as much. Science is still self-correcting: you can still criticise other people's living documents and, best-case scenario, they do correct their document, with appropriate reference to yours.

    The other major criticism from a couple of comments was along the lines of "there are too many papers already". This confuses me, because living documents would substantially reduce the number of papers being written - for my own research output I estimated by a factor of four! Thus, when examining the literature, there could be four times fewer papers to read, so, even if those papers are changing over time, the total required time/effort to keep up would still likely be reduced.

    Finally, having a living document doesn't mean you need to update it for every single bit of research done, as it is done. My idea for how it would work was more that, if/when you would write a paper under the current system, and it was mostly building on the results of an earlier one, then instead of publishing it as an entirely new paper you could just update the earlier one. This saves so much time for you and the community because you don't have to re-write an intro from scratch, or refer to a bunch of other papers in the intro for "additional details" that become very hard to extract from those papers because methods gradually improve over time. And you don't have papers that are 3-4 years old, with results that have now been superseded, still being seen as correct; you can just remove those results, or update them.

    But anyway, I haven't tried to make a case for exactly what *I* mean by a "living document" so I won't defend it further until I have! Thanks very much for the comments though, they will help my future thinking, I'm sure.

    Replies
    1. I like the lifelong-wiki accretion model.

      After chewing on this for a few days I realize the "packaging" value of the traditional paper could be delivered by snapshotting a subset of the wiki into a frozen document. And that I had already done this myself, as a "Wiki Ebook".

      http://webseitz.fluxent.com/wiki/WikiEbook

  9. Regarding peer review, I didn't mean to imply that the current system is broken or seriously damaging science, more just that it doesn't *add* value, and perhaps, because of the time (and money) taken to implement it, this means it does a small amount of harm. I think science is in a healthy state, because of the other type of peer review, which is the discussion/criticism/engagement that happens outside of the official journal process.

    I just also think that if there are better ways to do something then they may as well be pursued. James, your example where you say you "would prefer a larger number of reviewers, perhaps discussing the paper in an active online forum along with the authors..." sounds almost exactly like what I would prefer and what something like Hypothesis can provide. You don't need academic journals for this, just a communally maintained preprint server and commenting/annotating software. For making the decision of which papers to read you could just read the ones generating the most discussion, or the ones with the most "upvoted" positive reviews (or the ones positively reviewed by people you trust). I can't see how that system could be worse than the system of reading only those papers that the people at Nature decided to publish.

    I suppose it's worth me making a confession. I haven't read a paper that exists on the arXiv at any source outside of the arXiv for years. Moreover, I haven't checked to see whether a paper is published before reading it for years (or ever!). Many papers I do read and find interesting have already been cited 5-10 times (or, sometimes, vastly more) by the time they are "published". The quantity of papers of potential interest *might* have an impact here, but I do need to sift through >100 potential titles/abstracts each week to find those of interest. I assume that the interesting ones I miss will be brought up by a colleague at journal clubs, or I will see someone speak about it at a conference, or it will eventually be referred to in a paper I do read and I'll know about it from that... or maybe even a blog will talk about it, etc.

    So, from my perspective, journal based peer review really is adding no value to my life as a scientist, except making the people who upload papers to the arXiv try to make their papers publishable.

    Replies
    1. I guess the main difference between what I would like and the kind of pre-print server with annotation software that you describe is that I would prefer manuscripts to remain confidential until the review process is complete. I think this is important in experimental fields where the interpretation of data might change depending on key experiments suggested by reviewers and people may not want their first interpretations out in public, not least because the strategy may be stolen and hastily published elsewhere with the benefit of additional work that the competitor had already done. I also don't think that the reviewing process should just be open to everyone in the field, that would make it very difficult to sort the poor reviews from the legitimate. I think it's natural to be quite defensive about your own work - I certainly have felt aggrieved in the past with some reviewers' comments. If this is just any old Tom, Dick, or Harry it would be very easy to dismiss comments as unimportant. In fact, even if it's a well-respected researcher it would still be tempting to think that they just haven't really put the time into fully understanding your paper. With the current system, imperfect though it is, you can't just ignore comments, and (in my experience) often come to realise that in fact improvements could be made.

      Both of these points require some form of editorial process to sift through submitted manuscripts, assign reviewers, and have the final say over when a paper is ready to go public.

    2. Thanks, these are useful opinions to know. I guess there are three points: 1) fear of being scooped in some way, 2) fear of looking stupid in some way, and 3) concern that responding to too many reviews would become difficult, and that there needs to be some accountability to force authors to engage with the reviews.

      Regarding 1) I'm kind of confused because once your article is out on a preprint server you *can't* be scooped. Your result is already public. I appreciate that *today* publication in a journal counts and so there might be a fear that someone else gets an official publication of your technique/results before you; however in a world where preprints are taken seriously (e.g. high energy physics and astrophysics) this becomes irrelevant.

      Regarding 2) I do understand this and empathise with it. From an outside perspective one could argue that this is actually good for science because it forces people to stick to their initial hypotheses, etc.; however, from a human perspective I do appreciate that this fear would be a substantial obstacle. In fact, some observational groups within the arXiv physics fields do actually wait for publication before submitting to the arXiv. I guess in theoretical physics you can be more confident because there can't be any mysterious sources of error; it's often just maths. One way we do try to mitigate this, though, is by the (quite common) practice of sending a pre-pre-print to people you trust not to scoop you, who you value the opinion of, and who you know will be blunt with you, asking for their opinion of the paper before submitting to the arXiv. This pre-peer-review process does sometimes generate useful comments/discussion (and has resulted in substantial-ish changes to a paper), but often the people just apologise for not having the time, promise to read it once it's on the arXiv, and say they're sure it's great.

      Regarding 3), well, I'd like to think reputation will be the important factor here. People who just ignore convincing reviews that point out flaws will be doing this visibly. A paper would hopefully have a big social black mark against it when such a situation occurs. This non-response would be discussed at conferences, in emails, at journal clubs, etc. You would simply have to respond, or face the consequences of the community. The issue of too many reviews could in principle become a problem, I do agree. It could become very time consuming to have to constantly re-visit an old paper to reply to another comment that may or may not have a lot of relevance. Perhaps an upvoting/downvoting (or just upvoting) system on the *comments* (not on the papers, I - at least now - think that's a bad idea) would work here. You could ignore the comments that the community thinks are not relevant and respond to those that the community finds important. If, two years after you publish a paper, someone writes a comment that generates a lot of upvotes, then you will probably want to respond because the chances are that person has noticed something important.

      Peer review of funding applications would still exist (it doesn't seem as redundant to me and I've no clue how to replace it with something better - the only sort of alternative I can think of is crowdsourcing funds, which I don't think I would prefer) and I think this would force accountability in the peer review process. If you had a reputation for not taking good comments seriously this would show up in your reviews when you apply for grants. The very person whose comment you ignored could be asked to review your proposal in the future (or, at least, someone they ranted to about your dismissal of their views)!

    3. I don't think the issue of being scooped is quite as simple as you make it seem. It's true that the pre-print server system would mean you couldn't be scooped on individual experiments, but you could still be scooped when it comes to ownership of an idea. For example, you might do an investigation with three key experiments: the first one is good but the second and third get panned by reviewers in full view of the field. The conclusions of your paper are now seen as unreliable, but the first experiment is still out there. A competitor working on similar stuff could use the conclusions from that experiment and combine it with their, still confidential, work to get out a paper that has a lot more significance to the field than yours. Moreover, a lot of funding bodies require that you make all published materials freely available, so the competitor could legitimately demand reagents from you and use them alongside stuff that they've made but isn't published. Life's not fair and I doubt anyone would really remember the one good experiment in an otherwise failed paper even if it turned out to be useful for other studies. At least in the current system your first paper would have been rejected, giving you time to improve the study while the good data is still confidential.

    4. Very good points. A really good article. The responses and the other things show that a widening body of thought finds science not quite itself. Science: is it sickly, perhaps with a virus like a cold? So we lay it down gently, all tucked in, science in a sick bed. There's nothing we know how to do. Because we do not know what science is, we are ignorant of the anatomy of science. Could be nothing is wrong and things are better than ever. Might be a death bed. We can't tell how sick or well, because a miracle is the greatest mystery of all. A miracle never before seen, never seen again.

  10. " I would be interested to know of any studies providing evidence as to which "
    Hi, Chris Mannering here. I don't know if this will prove useful, but Sir Timothy Gowers was getting stuck into this matter a couple of years back.

