Similar Literature
20 similar records found
3.
All the opinions in this article are those of the authors and should not be construed to reflect, in any way, those of the Department of Veterans Affairs.

Background

Our study purpose was to assess the predictive validity of reviewer quality ratings and editorial decisions in a general medicine journal.

Methods

Submissions to the Journal of General Internal Medicine (JGIM) between July 2004 and June 2005 were included. We abstracted JGIM peer review quality ratings, verified the publication status of all articles, and calculated a relative impact factor for published articles (Rw) by dividing each article's 3-year citation rate by the average for this group of papers; an Rw > 1 indicates greater-than-average impact.
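As a rough illustration of that calculation, the sketch below computes Rw from a handful of made-up 3-year citation counts; the article identifiers and counts are invented, not data from the study.

```python
# Minimal sketch of the relative impact factor (Rw) described above:
# each article's 3-year citation count divided by the group mean,
# so Rw > 1 marks greater-than-average impact. Counts are illustrative.
from statistics import mean

citations_3yr = {"article_a": 12, "article_b": 4, "article_c": 8}

group_mean = mean(citations_3yr.values())  # average 3-year citation rate
rw = {aid: count / group_mean for aid, count in citations_3yr.items()}

for aid, value in sorted(rw.items(), key=lambda kv: -kv[1]):
    flag = "above average" if value > 1 else "at or below average"
    print(f"{aid}: Rw = {value:.2f} ({flag})")
```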

Results

Of 507 submissions, 128 (25%) were published in JGIM, 331 were rejected (128 of these after review), and 48 were either not resubmitted after a requested revision or were withdrawn by the authors. Of the 331 rejections, 243 were subsequently published elsewhere. Articles published in JGIM had a higher citation rate than those published elsewhere (Rw: 1.6 vs. 1.1, p = 0.002). Reviewer ratings of article quality had good internal consistency, and reviewer recommendations markedly influenced publication decisions. No quality-rating cutpoint accurately distinguished high-impact from low-impact articles. There was a stepwise increase in Rw for articles rejected without review, rejected after review, or accepted by JGIM (Rw 0.60 vs. 0.87 vs. 1.56, p < 0.0005). However, agreement between reviewers was low for both quality ratings and publication recommendations. The editorial publication decision accurately discriminated high- from low-impact articles for 68% of submissions. We found evidence of better accuracy with a greater number of reviewers.

Conclusions

The peer review process largely succeeds in selecting high-impact articles and weeding out lower-impact ones, but the process is far from perfect. Although inter-rater reliability between individual reviewers is low, the accuracy of this sorting improves as the number of reviewers increases.

7.
Data “publication” seeks to appropriate the prestige of authorship in the peer-reviewed literature to reward researchers who create useful and well-documented datasets. The scholarly communication community has embraced data publication as an incentive to document and share data, but numerous new and ongoing experiments in implementation have not yet resolved what a data publication should be, when data should be peer reviewed, or how data peer review should work. While researchers have been surveyed extensively regarding data management and sharing, their perceptions and expectations of data publication are largely unknown. To bring this important yet neglected perspective into the conversation, we surveyed approximately 250 researchers across the sciences and social sciences, asking what expectations “data publication” raises and what features would be useful for evaluating the trustworthiness, gauging the impact, and enhancing the prestige of a data publication. We found that researcher expectations of data publication center on availability, generally through an open database or repository. Few respondents expected published data to be peer reviewed, but peer-reviewed data enjoyed much greater trust and prestige. The importance of adequate metadata was acknowledged: almost all respondents expected data peer review to include evaluation of the data's documentation. Formal citation in the reference list was affirmed by most respondents as the proper way to credit dataset creators. Citation count was viewed as the most useful measure of impact, but download count was seen as nearly as valuable. These results offer practical guidance for data publishers seeking to meet researcher expectations and enhance the value of published data.

8.
Roebber PJ, Schultz DM. PLoS ONE 2011; 6(4): e18680
Increased competition for research funding has led to growth in proposal submissions and lower funding-success rates. An agent-based model of the funding cycle, which accounts for variations in program officer and reviewer behavior across a range of funding rates, is used to assess the efficiency of different proposal-submission strategies. Program officers who use more reviewers and require consensus improve the chances of scientists who submit fewer proposals. Selfish or negligent reviewers reduce the effectiveness of submitting more proposals, but have less influence as available funding declines. Policies designed to decrease proposal submissions reduce reviewer workload, but can lower the quality of funded proposals. When available funding falls below 10–15% in this model, the most effective strategy for scientists to maintain funding is to submit many proposals.
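The sketch below is a deliberately simplified, hypothetical agent-based funding cycle in the spirit of the model described above, not the authors' implementation: reviewers score each proposal as its latent quality plus noise, and only the top fraction set by the funding rate is awarded. All parameter names and values are invented for illustration.

```python
# Hypothetical sketch of an agent-based funding cycle (not the authors' code).
# Scientists differ in how many proposals they submit; reviewers score each
# proposal as its latent quality plus noise; the top `funding_rate` fraction
# of proposals is funded each cycle.
import random

def run_cycle(n_scientists=100, funding_rate=0.10, n_reviewers=3,
              submissions_per_scientist=(1, 5), review_noise=0.2, seed=0):
    rng = random.Random(seed)
    proposals = []  # (scientist_id, mean reviewer score)
    for sci in range(n_scientists):
        n_sub = rng.randint(*submissions_per_scientist)
        for _ in range(n_sub):
            quality = rng.random()  # latent proposal quality in [0, 1]
            scores = [quality + rng.gauss(0, review_noise)
                      for _ in range(n_reviewers)]
            proposals.append((sci, sum(scores) / n_reviewers))

    # Fund the top fraction of proposals by mean reviewer score.
    proposals.sort(key=lambda p: p[1], reverse=True)
    n_funded = int(funding_rate * len(proposals))
    funded_scientists = {sci for sci, _ in proposals[:n_funded]}
    return len(funded_scientists) / n_scientists  # share of scientists funded

for rate in (0.25, 0.15, 0.05):
    print(f"funding rate {rate:.0%}: "
          f"{run_cycle(funding_rate=rate):.0%} of scientists funded")
```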

10.

Background

Citation data can be used to evaluate the editorial policies and procedures of scientific journals. Here we investigate citation counts for the three different publication tracks of the Proceedings of the National Academy of Sciences of the United States of America (PNAS). This analysis explores the consequences of differences in editor and referee selection, while controlling for the prestige of the journal in which the papers appear.

Methodology/Principal Findings

We find that papers authored and “Contributed” by NAS members (Track III) are cited less often, on average, than papers that are “Communicated” for others by NAS members (Track I) or submitted directly via the standard peer review process (Track II). However, we also find that the variance in citation counts of Contributed papers, and to a lesser extent Communicated papers, is larger than for direct submissions. Consequently, among the 10% most-cited papers from each track, Contributed papers receive the most citations, followed by Communicated papers, with Direct submissions receiving the fewest.
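The toy calculation below uses made-up citation counts (not PNAS data) to show how one group of papers can have a lower mean citation count yet a larger variance and a stronger top decile, which is the pattern reported above for Contributed versus Direct submissions.

```python
# Illustrative only: invented citation counts showing how a track can have a
# lower mean yet a heavier upper tail (higher top-decile citations).
from statistics import mean, pvariance

tracks = {
    "Contributed": [0, 0, 1, 1, 2, 2, 3, 4, 5, 40],
    "Direct":      [4, 5, 6, 7, 8, 9, 10, 11, 12, 14],
}

for name, cites in tracks.items():
    top = sorted(cites)[-max(1, len(cites) // 10):]  # top 10% of papers
    print(f"{name:12s} mean={mean(cites):5.1f} "
          f"variance={pvariance(cites):7.1f} top-10% mean={mean(top):5.1f}")
```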

Conclusion/Significance

Our findings suggest that PNAS “Contributed” papers, in which NAS-member authors select their own reviewers, balance an overall lower impact with an increased probability of publishing exceptional papers. This analysis demonstrates that different editorial procedures are associated with different levels of impact, even within the same prominent journal, and raises interesting questions about the most appropriate metrics for judging an editorial policy's success.

13.
Double-blind review favours increased representation of female authors
Double-blind peer review, in which neither the author's nor the reviewer's identity is revealed, is rarely practised in ecology or evolution journals. However, in 2001, double-blind review was introduced by the journal Behavioral Ecology. Following this policy change, there was a significant increase in female first-authored papers, a pattern not observed in a very similar journal that provides reviewers with author information. No negative effects could be identified, suggesting that double-blind review should be considered by other journals.
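One simple way such a shift could be tested is a two-proportion z-test on the share of female first-authored papers before and after the policy change; the sketch below uses invented counts, not the journal's data.

```python
# Hypothetical two-proportion z-test of the share of female first-authored
# papers before vs. after a switch to double-blind review. The counts are
# invented for illustration only.
from math import sqrt, erf

def two_proportion_z(k1, n1, k2, n2):
    p1, p2 = k1 / n1, k2 / n2
    pooled = (k1 + k2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    p_two_sided = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal tail
    return z, p_two_sided

# before: 70 of 250 papers female first-authored; after: 105 of 260
z, p = two_proportion_z(70, 250, 105, 260)
print(f"z = {z:.2f}, two-sided p = {p:.4f}")
```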

14.
Leek JT, Taub MA, Pineda FJ. PLoS ONE 2011; 6(11): e26895
Peer review is fundamentally a cooperative process between scientists in a community who agree to review each other's work in an unbiased fashion. Peer review is the foundation for decisions concerning publication in journals, awarding of grants, and academic promotion. Here we perform a laboratory study of open and closed peer review based on an online game. We show that when reviewer behavior was made public under open review, reviewers were rewarded for refereeing and formed significantly more cooperative interactions (13% increase in cooperation, P = 0.018). We also show that referees and authors who participated in cooperative interactions had an 11% higher reviewing accuracy rate (P = 0.016). Our results suggest that increasing cooperation in the peer review process can lead to a decreased risk of reviewing errors.

16.
1. Two senior ecologists summarised their experience of the scientific publication process (Statzner & Resh, Freshwater Biology, 2010; 55, 2639) to generate discussion, particularly among early career researchers (ECRs). As a group of eight ECRs, we comment on the six trends they described. 2. We generally agree with most of the trends identified by Statzner & Resh (2010), but also highlight a number of divergent perspectives and provide recommendations for change. Trends of particular concern are the use of inappropriate metrics to evaluate research quality (e.g. impact factor) and the salami-slicing of papers to increase paper counts. We advocate a transparent and comprehensive system for evaluating research. 3. We stress the importance of impartiality and independence in the peer review process. We therefore suggest implementation of double-blind review and quality-control measures for reviewers and possibly editors. Besides such structural changes, editors should be confident enough to overrule biased reviewer recommendations, while reviewers should provide helpful reviews but be explicit if a submission does not meet quality standards. Authors should always conduct a thorough literature search and acknowledge historical scientific ideas and methods. Additionally, authors should report low-quality copy editing and reviews to the editors. 4. Both early and late career researchers should jointly implement these recommendations to reverse the negative trends identified by Statzner & Resh (2010). However, more senior scientists will always have to take the lead with respect to structural changes in the publication system, given that they occupy the majority of decision-making positions.

18.

Background

Peer review of grant applications has been criticized as lacking reliability. Studies showing poor agreement among reviewers supported this possibility but usually focused on reviewers’ scores and failed to investigate reasons for disagreement. Here, our goal was to determine how reviewers rate applications, by investigating reviewer practices and grant assessment criteria.

Methods and Findings

We first collected and analyzed a convenience sample of French and international calls for proposals and assessment guidelines, from which we created an overall typology of assessment criteria comprising nine domains: relevance to the call for proposals, usefulness, originality, innovativeness, methodology, feasibility, funding, ethical aspects, and writing of the grant application. We then performed a qualitative study of reviewer practices, particularly regarding the use of assessment criteria, among reviewers of the French Academic Hospital Research Grant Agencies (Programmes Hospitaliers de Recherche Clinique, PHRCs). Semi-structured interviews and observation sessions were conducted. Both the time spent assessing each grant application and the assessment methods varied across reviewers. The assessment criteria recommended by the PHRCs were listed by all reviewers as frequently evaluated and useful. However, use of the PHRC criteria was subjective and varied across reviewers. Some reviewers gave the same weight to each assessment criterion, whereas others considered originality the most important criterion (12/34), followed by methodology (10/34) and feasibility (4/34). Conceivably, this variability might adversely affect the reliability of the review process, and studies evaluating this hypothesis would be of interest.
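To see why the weighting issue matters, the toy example below (not data from the study) scores two hypothetical applications under equal weights and under an originality-heavy weighting; the ranking flips on the weighting choice alone.

```python
# Toy illustration (not study data): the same criterion scores ranked under
# equal weights vs. an originality-heavy weighting, showing how reviewer
# weighting choices alone can reorder applications.
criteria = ["relevance", "originality", "methodology", "feasibility"]

applications = {
    "A": {"relevance": 4, "originality": 5, "methodology": 2, "feasibility": 3},
    "B": {"relevance": 4, "originality": 2, "methodology": 5, "feasibility": 4},
}

weightings = {
    "equal weights":     {c: 1.0 for c in criteria},
    "originality-heavy": {"relevance": 1.0, "originality": 3.0,
                          "methodology": 1.0, "feasibility": 1.0},
}

for label, w in weightings.items():
    scores = {app: sum(w[c] * s[c] for c in criteria)
              for app, s in applications.items()}
    ranked = sorted(scores, key=scores.get, reverse=True)
    print(f"{label}: ranking {ranked}, scores {scores}")
```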

Conclusions

Variability across reviewers may foster mistrust of the review process among grant applicants. Consequently, ensuring transparency is of the utmost importance. Consistency in the review process could also be improved by providing common definitions for each assessment criterion and uniform requirements for grant application submissions. Further research is needed to assess the feasibility and acceptability of these measures.

19.
Prion. 2013; 7(6): 441–442
Prion is grateful for the ongoing support of its peer reviewers, who ensure that the submissions accepted for publication in Prion continue to be of the highest standard. We very much appreciate their time and the thoughtful reviews they provide. We would like to thank the following peer reviewers for their assistance in 2013:

20.

Background

Adverse events are poor patient outcomes caused by medical care. Their identification requires peer review of poor outcomes, which may be unreliable. Combining physician ratings might improve the accuracy of adverse event classification.

Objective

To evaluate the variation in peer-reviewer ratings of adverse outcomes; to determine the impact of this variation on estimates of reviewer accuracy; and to determine how many reviewers must judge that an adverse event occurred to ensure that the true probability of an adverse event exceeds 50%, 75% or 95%.

Methods

Thirty physicians rated 319 case reports giving details of poor patient outcomes following hospital discharge. They rated whether medical management caused the outcome using a six-point ordinal scale. We conducted latent class analyses to estimate the prevalence of adverse events as well as the sensitivity and specificity of each reviewer. We used this model and Bayesian calculations to determine the probability that an adverse event truly occurred for each patient as a function of the number of positive ratings.

Results

The overall median score on the six-point ordinal scale was 3 (IQR 2–4), but individual raters' median scores ranged from 1 (in four reviewers) to 5. Overall, 39.7% of case ratings (3798/9570) classified the outcome as an adverse event. The median kappa for all pair-wise combinations of the 30 reviewers was 0.26 (IQR 0.16–0.42; min = −0.07, max = 0.62). Reviewer sensitivity and specificity for adverse event classification ranged from 0.06 to 0.93 and from 0.50 to 0.98, respectively. The estimated prevalence of adverse events under a latent class model with a common sensitivity and specificity for all reviewers (0.64 and 0.83, respectively) was 47.6%. For a patient to have a 95% chance of truly having experienced an adverse event, all 3 of 3 reviewers must deem the outcome an adverse event.
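The Bayesian step described above can be sketched as follows, using the reported common sensitivity (0.64), specificity (0.83) and prevalence (47.6%), and assuming reviewer ratings are conditionally independent given the true adverse event status; this is an illustration of the calculation, not the study's code.

```python
# Posterior probability that an adverse event truly occurred given k of n
# positive reviewer ratings, using the common sensitivity/specificity and
# prevalence reported above and assuming conditionally independent reviewers.
from math import comb

def posterior(k, n, sens=0.64, spec=0.83, prev=0.476):
    # Likelihood of k positive ratings among n, given a true AE / no AE.
    like_ae = comb(n, k) * sens**k * (1 - sens)**(n - k)
    like_no = comb(n, k) * (1 - spec)**k * spec**(n - k)
    return prev * like_ae / (prev * like_ae + (1 - prev) * like_no)

for k in range(4):
    print(f"{k}/3 positive ratings -> P(true adverse event) = "
          f"{posterior(k, 3):.3f}")
```

With these parameters, 2 of 3 positive ratings yields a posterior of roughly 0.85, while 3 of 3 exceeds 0.95, consistent with the result reported above.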

Conclusion

Adverse event classification is unreliable. To be certain that a case truly represents an adverse event, agreement among multiple reviewers is required.
