Related Articles

20 related articles found (search time: 15 ms).
1.
2.

Background

Citation data can be used to evaluate the editorial policies and procedures of scientific journals. Here we investigate citation counts for the three different publication tracks of the Proceedings of the National Academy of Sciences of the United States of America (PNAS). This analysis explores the consequences of differences in editor and referee selection, while controlling for the prestige of the journal in which the papers appear.

Methodology/Principal Findings

We find that papers authored and “Contributed” by NAS members (Track III) are on average cited less often than papers that are “Communicated” for others by NAS members (Track I) or submitted directly via the standard peer review process (Track II). However, we also find that the variance in the citation count of Contributed papers, and to a lesser extent Communicated papers, is larger than for direct submissions. Therefore, when examining the 10% most-cited papers from each track, Contributed papers receive the most citations, followed by Communicated papers, while direct submissions receive the fewest.
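A minimal sketch of the kind of per-track comparison described above: mean, variance, and the mean of the top 10% most-cited papers for each submission track. The citation counts are randomly simulated placeholders, not PNAS data.

```python
# Illustrative sketch only: compare citation distributions across the three
# PNAS submission tracks. The counts below are simulated, not real data.
import random
import statistics

random.seed(0)
tracks = {
    "Communicated (Track I)": [int(random.lognormvariate(2.0, 1.0)) for _ in range(500)],
    "Direct (Track II)": [int(random.lognormvariate(2.1, 0.8)) for _ in range(500)],
    "Contributed (Track III)": [int(random.lognormvariate(1.9, 1.2)) for _ in range(500)],
}

for name, counts in tracks.items():
    counts = sorted(counts)
    top_decile = counts[int(0.9 * len(counts)):]  # the 10% most-cited papers
    print(f"{name}: mean={statistics.mean(counts):.1f}, "
          f"variance={statistics.pvariance(counts):.1f}, "
          f"top-10% mean={statistics.mean(top_decile):.1f}")
```

A heavier-tailed (higher-variance) distribution can have a lower mean yet a higher top-decile mean, which is the pattern the abstract reports for Contributed papers.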

Conclusion/Significance

Our findings suggest that PNAS “Contributed” papers, in which NAS-member authors select their own reviewers, balance an overall lower impact with an increased probability of publishing exceptional papers. This analysis demonstrates that different editorial procedures are associated with different levels of impact, even within the same prominent journal, and raises interesting questions about the most appropriate metrics for judging an editorial policy's success.

3.
4.
5.
The Ecological Society of Australia was founded in 1959, and the society’s journal was first published in 1976. To examine how research published in the society’s journal has changed over this time, we used text mining to quantify themes and trends in the body of work published by the Australian Journal of Ecology and Austral Ecology from 1976 to 2019. We used topic models to identify 30 ‘topics’ within 2778 full‐text articles in 246 issues of the journal, followed by mixed modelling to identify topics with above‐average or below‐average popularity in terms of the number of publications or citations that they contain. We found high inter‐decadal turnover in research topics, with an early emphasis on highly specific ecosystems or processes giving way to a modern emphasis on community, spatial and fire ecology, invasive species and statistical modelling. Despite an early focus on Australian research, papers discussing South American ecosystems are now among the fastest‐growing and most frequently cited topics in the journal. Topics that were growing fastest in publication rates were not always the same as those with high citation rates. Our results provide a systematic breakdown of the topics that Austral Ecology authors and editors have chosen to research, publish and cite through time, providing a valuable window into the historical and emerging foci of the journal.
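A minimal sketch of the kind of topic-modelling workflow summarized above, using latent Dirichlet allocation from scikit-learn on a toy corpus. The documents, number of topics and preprocessing here are illustrative assumptions; the study itself fitted 30 topics over 2778 full-text articles.

```python
# Illustrative sketch only: fit an LDA topic model to a toy corpus and print
# the top words per topic. The corpus and settings are placeholders, not the
# authors' pipeline.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

documents = [
    "fire regimes shape vegetation recovery in eucalypt woodland",
    "invasive species alter community composition and competition",
    "spatial models of species distributions under changing climate",
    "grazing effects on soil nutrients and grassland diversity",
]

vectorizer = CountVectorizer(stop_words="english")
dtm = vectorizer.fit_transform(documents)  # document-term matrix

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(dtm)

terms = vectorizer.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top_terms = [terms[i] for i in topic.argsort()[-5:][::-1]]
    print(f"Topic {k}: {', '.join(top_terms)}")
```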

6.
7.
8.

Objective

To compare expert assessment with bibliometric indicators as tools to assess the quality and importance of scientific research papers.

Methods and Materials

Shortly after their publication in 2005, the quality and importance of a cohort of nearly 700 Wellcome Trust (WT)-associated research papers were assessed by expert reviewers; each paper was reviewed by two WT expert reviewers. After 3 years, we compared this initial assessment with other measures of paper impact.

Results

Shortly after publication, 62 (9%) of the 687 research papers were determined to describe at least a ‘major addition to knowledge’; 6 were thought to be ‘landmark’ papers. At an aggregate level, after 3 years, there was a strong positive association between expert assessment and impact as measured by the number of citations and the F1000 rating. However, there were some important exceptions, indicating that bibliometric measures may not be sufficient in isolation as measures of research quality and importance, especially for assessing single papers or small groups of research publications.

Conclusion

When attempting to assess the quality and importance of research papers, we found that sole reliance on bibliometric indicators would have led us to miss papers containing important results as judged by expert review. In particular, some papers that were highly rated by experts were not highly cited during the first three years after publication. Tools that link expert peer reviews of research paper quality and importance to more quantitative indicators, such as citation analysis, would be valuable additions to the field of research assessment and evaluation.

9.
All the opinions in this article are those of the authors and should not be construed to reflect, in any way, those of the Department of Veterans Affairs.

Background

Our study purpose was to assess the predictive validity of reviewer quality ratings and editorial decisions in a general medicine journal.

Methods

Submissions to the Journal of General Internal Medicine (JGIM) between July 2004 and June 2005 were included. We abstracted JGIM peer review quality ratings, verified the publication status of all articles, and calculated an impact factor for published articles (Rw) by dividing each paper's 3-year citation rate by the average for this group of papers; an Rw > 1 indicates greater-than-average impact.
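A minimal sketch of the Rw calculation described above: each paper's 3-year citation count divided by the group average, so that Rw > 1 marks above-average impact. The citation counts and paper names are invented placeholders, not JGIM data.

```python
# Illustrative sketch only: relative impact (Rw) as described in the Methods.
# The citation counts below are made-up examples, not data from the study.
citations_3yr = {
    "paper_A": 12,
    "paper_B": 3,
    "paper_C": 30,
    "paper_D": 7,
}

mean_citations = sum(citations_3yr.values()) / len(citations_3yr)

# Rw > 1: cited more than the average paper in this group.
rw = {paper: count / mean_citations for paper, count in citations_3yr.items()}

for paper, value in rw.items():
    print(f"{paper}: Rw = {value:.2f}")
```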

Results

Of 507 submissions, 128 (25%) were published in JGIM, 331 were rejected (128 of them after review), and 48 were either not resubmitted after revision was requested or were withdrawn by the author. Of the 331 rejections, 243 were published elsewhere. Articles published in JGIM had a higher citation rate than those published elsewhere (Rw: 1.6 vs. 1.1, p = 0.002). Reviewer ratings of article quality had good internal consistency, and reviewer recommendations markedly influenced publication decisions. There was no quality-rating cutpoint that accurately distinguished high-impact from low-impact articles. There was a stepwise increase in Rw for articles rejected without review, rejected after review, or accepted by JGIM (Rw 0.60 vs. 0.87 vs. 1.56, p<0.0005). However, agreement between reviewers on quality ratings and publication recommendations was low. The editorial publication decision accurately discriminated high- from low-impact articles in 68% of submissions. We found evidence of better accuracy with a greater number of reviewers.

Conclusions

The peer review process largely succeeds in selecting high-impact articles and dispatching lower-impact ones, but the process is far from perfect. While the inter-rater reliability between individual reviewers is low, the accuracy of sorting improves with a greater number of reviewers.

10.
11.
12.
13.
14.
15.
Scholarly collaborations across disparate scientific disciplines are challenging. Collaborators are likely to have their offices in another building, attend different conferences, and publish in other venues; they might speak a different scientific language and value an alien scientific culture. This paper presents a detailed analysis of the success and failure of interdisciplinary papers, as manifested in the citations they receive. For 9.2 million interdisciplinary research papers published between 2000 and 2012, we show that the majority (69.9%) of co-cited interdisciplinary pairs are “win-win” relationships, i.e., papers that cite them have higher citation impact, and only 3.3% are “lose-lose” relationships. Papers citing references from subdisciplines positioned far apart (in the conceptual space of the UCSD map of science) attract the highest relative citation counts. The findings support the assumption that interdisciplinary research is more successful and leads to results greater than the sum of its disciplinary parts.

16.
17.
A number of new metrics based on social media platforms, grouped under the term “altmetrics”, have recently been introduced as potential indicators of research impact. Despite their current popularity, there is a lack of information regarding the determinants of these metrics. Using publication and citation data from 1.3 million papers published in 2012 and covered in Thomson Reuters’ Web of Science, as well as social media counts from Altmetric.com, this paper analyses the main patterns of five social media metrics as a function of document characteristics (i.e., discipline, document type, title length, number of pages and references) and collaborative practices, and compares them to patterns known for citations. Results show that the presence of papers on social media is low, with 21.5% of papers receiving at least one tweet, 4.7% being shared on Facebook, 1.9% mentioned on blogs, 0.8% found on Google+ and 0.7% discussed in mainstream media. By contrast, 66.8% of papers have received at least one citation. Our findings show that both citations and social media metrics increase with the extent of collaboration and the length of the reference list. On the other hand, while editorials and news items are seldom cited, these document types are the most popular on Twitter. Similarly, while longer papers typically attract more citations, an opposite trend is seen on social media platforms. Finally, contrary to what is observed for citations, papers in the social sciences and humanities are the most often found on social media platforms. On the whole, these findings suggest that the factors driving social media mentions and citations are different. Therefore, social media metrics cannot actually be seen as alternatives to citations; at most, they may function as complements to other types of indicators.
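A minimal sketch of the coverage comparison reported above: the share of papers with at least one event per metric (tweets, Facebook shares, blog mentions, citations). The records are invented placeholders, not the Web of Science / Altmetric.com data.

```python
# Illustrative sketch only: per-metric coverage, i.e. the share of papers with
# at least one event. The records below are invented placeholders.
papers = [
    {"tweets": 2, "facebook": 0, "blogs": 0, "citations": 5},
    {"tweets": 0, "facebook": 0, "blogs": 0, "citations": 1},
    {"tweets": 1, "facebook": 1, "blogs": 0, "citations": 0},
    {"tweets": 0, "facebook": 0, "blogs": 0, "citations": 3},
]

for metric in ("tweets", "facebook", "blogs", "citations"):
    covered = sum(1 for paper in papers if paper[metric] > 0)
    print(f"{metric}: {100 * covered / len(papers):.0f}% of papers have at least one")
```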

18.
19.
Papers submitted to Bioscience Hypotheses should be innovative, clear, compatible with at least most of the facts, and testable. Not all good, new, challenging ideas meet these exacting standards. Editorial policy has been altered to include both an increased role for peer advice and an occasional role for editorial advice to authors, to bring out the ideas in a form that I think is most likely to attract interest from our readership.

20.
This study analyzes funding acknowledgments in scientific papers to investigate relationships between research sponsorship and publication impact. We identify acknowledgments to research sponsors in nanotechnology papers published in the Web of Science during a one-year sample period. We examine the citations accrued by these papers and the journal impact factors of the journals in which they appeared. The results show that publications from grant-sponsored research exhibit higher impact, in terms of both journal ranking and citation counts, than research that is not grant sponsored. We discuss the method and models used, and the insights provided by this approach as well as its limitations.
