Under the title Peer review in scholarly journals: Perspective of the scholarly community – an international study, the study carried out by the English firm Mark Ware Consulting for the Publishing Research Consortium is released today.
The Executive Summary of the study – including its conclusions – is transcribed in full below.
Methodology
The study was based on an online questionnaire with respondents recruited by email. The questionnaire was developed by us in conjunction with the Publishing Research Consortium and underwent several revisions before being pilot tested on a small subset of the sample. The questionnaire was revised in minor ways following the pilot to eliminate some apparently ambiguous wording and modify the scales used to collect numerical data in some instances.
A copy of the final questionnaire is attached as Appendix 1. The questionnaire was comprehensive – there were a total of 120 separate questions or statements to be tested, though the maximum any individual respondent would answer was 110.
The questionnaire involved several questions that categorised respondents and routed them to different sets of questions:
• Respondents who had not published in the last 24 months and were not journal editors were screened out of the survey (42 respondents)
• All other respondents answered general questions about peer review (Questions 3–9) and the demographics questions (Questions 50–55): 3040 respondents
The filter questions then determined which of the following sets of questions respondents answered:
• “Authors” (all who passed the initial screening questions) answered questions specifically about authors’ experience of peer review (Questions 10–23): 3040 respondents.
• “Reviewers” (respondents who had reviewed at least one paper in the last 12 months) answered Questions 10c–35, relating to the experience of reviewers: 2165 respondents.
• “Editors” (those identifying themselves as such in the initial screening question or at Question 22) skipped the questions for Reviewers in favour of the editors’ questions (Questions 36–49): 632 respondents.
The sample list used to invite respondents was primarily sourced from Thomson Scientific and consisted of approximately 40,000 email addresses of authors who had recently published. (The list was specified to provide a geographical spread matching the overall database, using the most recent authors.) It was supplemented with a list of journal editors available to the PRC, created by “scraping” journal websites. Data were collected in November 2007.
Executive summary
This global survey reports on the attitudes and behaviour of 3040 academics in relation to peer review in journals.
Peer review is seen as an essential component of scholarly communication, the mechanism that facilitates the publication of primary research in academic journals. Although sometimes thought of as an essential part of the journal, it is only since the Second World War that peer review has been institutionalised in the form we know today. More recently it has come under criticism on a number of fronts: it has been said that it is unreliable, unfair and fails to validate or authenticate; that it is unstandardised and idiosyncratic; that its secrecy leads to irresponsibility on the part of reviewers; that it stifles innovation; that it causes delay in publication; and so on. Perhaps the strongest criticism is that there is a lack of evidence that peer review actually works, and a lack of evidence to indicate whether the documented failings are rare exceptions or the tip of an iceberg.
The survey reported here does not attempt to address directly the question of whether or not peer review works, but instead looks in detail at the experiences and perceptions of a large group of mostly senior authors, reviewers and editors (there is of course considerable overlap between these groups). Respondents were spread by region and by field of research broadly in line with the universe of authors publishing in the journals in the Thomson Scientific database, which covers the leading peer-reviewed journals. The survey presents its findings in two broad areas: attitudes to peer review, and current practices in peer review.
Attitudes to peer review
1. Peer review is widely supported. The overwhelming majority (93%) disagreed that peer review is unnecessary, the large majority (85%) agreed that peer review greatly helps scientific communication, and most (83%) believed that without peer review there would be no control.
2. Peer review improves the quality of the published paper. Researchers overwhelmingly (90%) said the main area of effectiveness of peer review was in improving the quality of the published paper. In their own experience as authors, 89% said that peer review had improved their last published paper, not only in terms of language and presentation but also in correcting scientific errors.
3. There is a desire for improvement. While the majority (64%) of academics declared themselves satisfied with the current system of peer review used by journals (and just 12% dissatisfied), they were divided on whether the current system is the best that can be achieved, with 36% disagreeing and 32% agreeing. There was a very similar division on whether peer review needs a complete overhaul. There was evidence that peer review is too slow (38% were dissatisfied with peer review times) and that reviewers are overloaded (see #13 below).
4. Double-blind review was preferred. Changes to peer review in recent years (such as the growth of double-blind review, and the introduction of open and post-publication review) have attempted to improve the system. Asked which of the four peer review types was their most preferred option, respondents favoured double-blind review, with 56% selecting it, followed by 25% for single-blind, 13% for open and 5% for post-publication review. Open peer review actively discouraged many reviewers, with 47% saying that disclosing their name to the author would make them less likely to review.
5. Double-blind review was seen as the most effective. Of the four types of peer review discussed, double-blind
review had the most respondents (71%) who perceived it to be effective, followed (in declining order) by single-blind (52%), post-publication (37%) and open peer review (26%). Respondents did not have personal experience of all types of review and tended to rate more highly the systems they had experienced. It is notable, though, that although 37% of respondents said that post-publication review was effective, only 8% had had experience of it as authors.
6. Double-blind review faces some fundamental objections. Double-blind review was primarily supported because of its perceived objectivity and fairness. Many respondents, including some of those supporting double-blind review, did however point out that there were great difficulties in operating it in practice because it was frequently too easy to identify authors from their references, type of work or other internal clues.
7. Post-publication review was seen as a useful supplement to formal peer review. In terms of recent developments facilitated by technological advances, some 37% thought that post-publication review was effective but only 5% preferred it over other approaches. This was evidently because researchers tended to see it as a useful supplement to formal peer review rather than a replacement for it (53% agreed, compared to 23% disagreeing). Interestingly, they saw this usefulness despite a clear view that it tends to encourage instant reactions and discourage thoughtful review.
8. No support for replacing peer review with metrics. There was strong opposition to replacing peer review with post-publication ratings or usage or citation statistics to identify good papers, with only 5–7% of respondents supporting these approaches.
9. Mixed support for review of authors’ data. A majority of reviewers (63%) and editors (68%) said that it is desirable in principle to review authors’ data. Perhaps surprisingly, a majority of reviewers (albeit a small one, 51%) said that they would be prepared to review authors’ data themselves, compared to only 19% who disagreed. This was despite 40% of reviewers (and 45% of editors) saying that it was unrealistic to expect peer reviewers to review authors’ data. Given that many reviewers also reported being overloaded, we wonder, however, whether they would remain as willing when it actually came to examining the data.
10. Limited support for payment for reviewers. Respondents were divided on whether reviewers should be paid, with 35% in favour and 40% against payment. A majority, however, supported the proposition that payment would make publishing too expensive (52% for, 18% against), and the large majority of reviewers (91%) said that they reviewed to play their part as a member of the academic community.
Current practices in peer review
11. Single-blind review was the most commonly experienced. The average respondent had published 60 papers in their career to date, and 8 papers in the last 24 months, suggesting they were fairly experienced and productive researchers. As authors, respondents’ experience of peer review was mainly of single-blind reviewing (84% said they had experienced this kind of review), followed at some distance by double-blind reviewing (44%). Less than a quarter (22%) reported experience of open peer review, while experience of post-publication review was limited to 8% of respondents.
12. Longer review times were a cause of dissatisfaction. Authors said the peer review of their last published paper took an average of 80 days. They were evenly balanced on whether or not this was satisfactory. There was a clear correlation between the reported time taken for peer review and the author’s satisfaction: 67% were satisfied when the time was under 30 days, but this dropped to 10% for 3–6 months, and to 9% for longer than 6 months.
13. The most productive reviewers were overloaded. Some 90% of authors were also reviewers, acting regularly for about 3.5 journals and occasionally for a further 4.2 journals. They reported reviewing an average of 8 papers in the last 12 months, compared to the maximum of 9 that they said they were prepared to review. Active reviewers, defined as those doing 6 or more reviews in the last 12 months, completed an average of 14 reviews per year, nearly twice the overall figure. This means that although Active reviewers make up 44% of all reviewers, they are responsible for 79% of all reviews. So when this group reports it is overloaded – doing 14 reviews per year compared to their preferred maximum of 13 – there is clearly a problem.
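A rough check, using only the rounded averages reported above (8 reviews per reviewer overall, 14 for the Active group), reproduces this concentration of reviewing effort:

\[
\frac{0.44 \times 14}{8} \approx 0.77
\]

That is, on these rounded figures Active reviewers account for roughly 77% of all reviews, consistent (allowing for rounding) with the 79% share computed from the underlying survey data.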
14. About 20% of invitations to review are declined. As well as completing 8 reviews per year, the average reviewer declined about 2 invitations to review, mainly because of a lack of time. Active reviewers, despite doing more reviews, if anything declined proportionately slightly fewer invitations.
15. The average review takes 5 hours and is completed in 3–4 weeks. Reviewers said that they took about 24 days (elapsed time) to complete their last review, with 85% reporting that they took 30 days or less. They spent a median of 5 hours (mean 9 hours) per review.
16. Altruistic reasons for reviewing were preferred over self-interested ones. Substantially the most popular reason given was “playing your part as a member of the academic community”. Self-interested reasons such as “to enhance your reputation or further your career” or “to increase the chance of being offered a role in the journal’s editorial team” were advanced much less frequently.
17. The average acceptance rate was 50%. Editors reported that the average acceptance rate for their journals was about 50%, which is consistent with other studies. About 20% are rejected prior to review (either because of poor quality (13%) or being out of scope (8%)) and another 30% are rejected following review. Of the 50% accepted, 40% are accepted subject to revision. Acceptance rates were lower in humanities and social sciences, and higher in physical sciences/engineering journals.
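A quick consistency check on these rounded shares of all submissions: the two pre-review rejection reasons sum to slightly more than the 20% headline figure because of rounding, and the three outcomes together account for all submissions:

\[
13\% + 8\% = 21\% \approx 20\%, \qquad 20\% + 30\% + 50\% = 100\%
\]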
18. Use of online submission systems. Three quarters of editors (76%) reported that their journal used an online manuscript submission and tracking system. Use was most common in life sciences (85%) and least common in humanities and social sciences (51%).
19. Access to journals literature. Some 69% of respondents described their access to the journals literature as good or excellent, with 7% describing it as poor or very poor. This probably represents an improvement in overall access compared with the CIBER 2004 survey (Rowlands et al., 2004), which reported 61% good/excellent and 10% poor/very poor (though a different geographical distribution of responses makes direct comparison difficult).
The survey thus paints a picture of academics committed to peer review, with the vast majority believing that it helps scientific communication and in particular that it improves the quality of published papers. They are willing to play their part in carrying out review, though it is worrying that the most productive reviewers appear to be overloaded. Many of them are in fact willing to go further than at present and take on responsibility for reviewing authors’ data.
Within this picture of overall satisfaction there are, however, some sizeable pockets of discontent. This discontent does not always translate into support for alternative methods of peer review; in fact some of those most positive about the benefits of peer review were also the most supportive of post-publication review. Overall, there was substantial minority support for post-publication review as a supplement to formal peer review, but much less support for open review as an alternative to blinded review.