Similar Literature

20 similar documents retrieved.
1.

Background

Conventional scientometric predictors of research performance such as the number of papers, citations, and papers in the top 1% of highly cited papers cannot be validated in terms of the number of Nobel Prize achievements across countries and institutions. The purpose of this paper is to find a bibliometric indicator that correlates with the number of Nobel Prize achievements.

Methodology/Principal Findings

This study assumes that the high-citation tail of the citation distribution holds most of the information about high scientific performance. Here I propose the x-index, which is calculated from the number of national articles in the top 1% and 0.1% of highly cited papers and includes a subtractive term to discount highly cited papers that are not scientific breakthroughs. The x-index, the number of Nobel Prize achievements, and the number of national articles in Nature or Science are highly correlated. The high correlations among these independent parameters demonstrate that they are good measures of high scientific performance, because scientific excellence is their only common characteristic. However, the x-index has superior features compared with the other two parameters: Nobel Prize achievements are low-frequency events, so their number is an imprecise indicator and is zero for most institutions, and evaluating research by the number of publications in prestigious journals is not advisable.

Conclusion

The x-index is a simple and precise indicator for high research performance.
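As an illustration of the tail-counting idea described above, here is a minimal sketch that, given world and national per-paper citation counts, counts the national papers above the world top-1% and top-0.1% thresholds. The final combination step (`toy_x_index`, its weights, and the discount term) is a hypothetical placeholder for illustration only; the actual x-index formula is defined in the source paper.

```python
import numpy as np

def tail_counts(world_citations, national_citations):
    """Count a country's papers above the world top-1% and top-0.1% citation thresholds."""
    world = np.asarray(world_citations)
    national = np.asarray(national_citations)
    t1 = np.percentile(world, 99.0)    # top 1% threshold
    t01 = np.percentile(world, 99.9)   # top 0.1% threshold
    return int((national > t1).sum()), int((national > t01).sum())

def toy_x_index(n_top1, n_top01, discount=0.0):
    # Hypothetical combination for illustration only: weight the rarer tail more
    # heavily and subtract a term to discount highly cited papers that are not
    # breakthroughs; this is NOT the published x-index formula.
    return n_top01 + 0.1 * n_top1 - discount

rng = np.random.default_rng(0)
world = rng.lognormal(mean=1.0, sigma=1.5, size=100_000)     # synthetic world citation counts
national = rng.lognormal(mean=1.2, sigma=1.5, size=5_000)    # synthetic national citation counts
n1, n01 = tail_counts(world, national)
print(n1, n01, toy_x_index(n1, n01))
```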

2.

Background

The h-index has already been used by major citation databases to evaluate the academic performance of individual scientists. Although effective and simple, the h-index suffers from some drawbacks that limit its use in accurately and fairly comparing the scientific output of different researchers. These drawbacks include information loss and low resolution: the former refers to the fact that, in addition to the h² citations for papers in the h-core, excess citations are completely ignored, whereas the latter means that it is common for a group of researchers to have an identical h-index.

Methodology/Principal Findings

To solve these problems, I here propose the e-index, where e² represents the ignored excess citations, in addition to the h² citations for h-core papers. Citation information can be completely depicted by using the h-index together with the e-index, which are independent of each other. Some other h-type indices, such as A and R, are h-dependent, have information redundancy with h, and therefore, when used together with h, mask the real differences in excess citations of different researchers.

Conclusions/Significance

Although simple, the e-index is a necessary complement to the h-index, especially for evaluating highly cited scientists or for precisely comparing the scientific output of a group of scientists with an identical h-index.
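A minimal sketch of the relationship described above, assuming a researcher's per-paper citation counts are given as a list: h is the usual Hirsch index, and e² is the citations of the h-core papers in excess of the h² that the h-index already accounts for.

```python
import math

def h_and_e_index(citations):
    """Compute the h-index and the e-index from a list of per-paper citation counts."""
    cites = sorted(citations, reverse=True)
    # h = largest h such that the h most cited papers each have >= h citations
    h = sum(1 for rank, c in enumerate(cites, start=1) if c >= rank)
    # e^2 = citations of the h-core papers in excess of the h^2 counted by the h-index
    core_citations = sum(cites[:h])
    e = math.sqrt(core_citations - h * h)
    return h, e

# Example: h = 4 (four papers with >= 4 citations); the h-core holds 10+8+6+4 = 28 citations,
# so e^2 = 28 - 16 = 12 and e is about 3.46.
print(h_and_e_index([10, 8, 6, 4, 3, 1]))
```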

3.
4.
In this paper, we assess the bibliometric parameters of 37 Dutch professors in clinical cardiology: the Hirsch index (h-index) based on all papers, the h-index based on first-authored papers, the number of papers, the number of citations, and the citations per paper. A top 10 for each of the five parameters was compiled. In theory, the same 10 professors might appear in each of these top 10s; alternatively, each of the 37 professors under assessment could appear one or more times. In practice, we found 22 of the 37 professors in the five top 10s. Thus, there is no golden parameter. In addition, there is too much inhomogeneity in citation characteristics, even within a relatively homogeneous group of clinical cardiologists. Therefore, citation analysis should be applied with great care in science policy, and even more so when different fields of medicine are compared within university medical centres. It may be possible to develop better parameters in the future, but the present ones are simply not good enough. We also observed a quite remarkable explosion of publications per author, which, paradoxical as it may sound, can probably not be interpreted as an increase in the productivity of scientists, but rather as the effect of an increase in the number of co-authors and the strategic effect of networks.

5.
Background

The need to evaluate curricula for research-project sponsorship or professional promotion has led to a search for tools that allow an objective valuation. However, the total number of papers published, the citations of a particular author's articles, or the impact factor of the journal in which they are published are inadequate indicators for evaluating the quality and productivity of researchers. The h-index, proposed by Hirsch, categorizes papers according to the number of citations per article. This tool appears to lack the limitations of other bibliometric tools but is less useful for non-English-speaking authors.

Aims

To propose and debate the usefulness of existing bibliometric indicators and tools for the evaluation and categorization of researchers and scientific journals.

Methods

Search for papers on bibliometric tools.

Results

There are several hot spots in the debate on the national and international evaluation of researchers' productivity and of the quality of scientific journals. Opinions on impact factors and the h-index are discussed. Positive discrimination, using the Q value, is proposed as an alternative for the evaluation of Spanish and Ibero-American researchers.

Conclusions

It is very important to demystify the importance of bibliometric indicators. The impact factor is useful for evaluating journals from the same scientific area but not for evaluating researchers' curricula. For comparing the curricula of two or more researchers, we must use the h-index or the proposed Q value; the latter allows positive discrimination in favour of Spanish and Ibero-American researchers.

6.
How to quantify the impact of a researcher's or an institution's body of work is a matter of increasing importance to scientists, funding agencies, and hiring committees. The use of bibliometric indicators, such as the h-index or the Journal Impact Factor, has become widespread despite their known limitations. We argue that most existing bibliometric indicators are inconsistent, biased, and, worst of all, susceptible to manipulation. Here, we pursue a principled approach to the development of an indicator that quantifies the scientific impact of both individual researchers and research institutions, grounded in the functional form of the distribution of the asymptotic number of citations. We validate our approach using the publication records of 1,283 researchers from seven scientific and engineering disciplines and of the chemistry departments at the 106 U.S. research institutions classified as having “very high research activity”. Our approach has three distinct advantages. First, it accurately captures the overall scientific impact of researchers at all career stages, as measured by asymptotic citation counts. Second, unlike other measures, our indicator is resistant to manipulation and rewards publication quality over quantity. Third, our approach captures the time evolution of the scientific impact of research institutions.

7.
The ever-increasing quantity and complexity of scientific production have made it difficult for researchers to keep track of advances in their own fields. This, together with the growing popularity of online scientific communities, calls for the development of effective information-filtering tools. We propose here an algorithm which simultaneously computes the reputation of users and the fitness of papers in a bipartite network representing an online scientific community. Evaluation on artificially generated data and on real data from the Econophysics Forum is used to determine the method's best-performing variants. We show that when the input data are extended to a multilayer network including users, papers, and authors, and the algorithm is correspondingly modified, the resulting performance improves on multiple levels. In particular, top papers have higher citation counts and top authors have higher h-indices than the top papers and top authors chosen by other algorithms. We finally show that our algorithm is robust against persistent authors (spammers), which makes the method readily applicable to existing online scientific communities.
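The abstract does not give the update rules, so the following is only a generic illustration of the mutual-reinforcement idea on a user-paper bipartite network: a user's reputation grows with the fitness of the papers they endorse, and a paper's fitness grows with the reputation of its endorsers. The normalization, convergence test, and endorsement matrix below are assumptions for illustration, not the algorithm evaluated on the Econophysics Forum data.

```python
import numpy as np

def corank(adj, n_iter=100, tol=1e-9):
    """Illustrative mutual-reinforcement iteration on a user x paper endorsement matrix.

    adj[u, p] = 1 if user u endorsed (e.g. downloaded or rated) paper p.
    Returns per-user reputation and per-paper fitness scores.
    """
    n_users, n_papers = adj.shape
    reputation = np.ones(n_users)
    fitness = np.ones(n_papers)
    for _ in range(n_iter):
        new_fitness = adj.T @ reputation          # papers endorsed by reputable users gain fitness
        new_fitness /= new_fitness.sum() or 1.0
        new_reputation = adj @ new_fitness        # users endorsing fit papers gain reputation
        new_reputation /= new_reputation.sum() or 1.0
        converged = np.abs(new_fitness - fitness).sum() < tol
        fitness, reputation = new_fitness, new_reputation
        if converged:
            break
    return reputation, fitness

# Tiny hypothetical example: 3 users endorsing 4 papers.
adj = np.array([[1, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 0, 1, 1]], dtype=float)
rep, fit = corank(adj)
print(rep.round(3), fit.round(3))
```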

8.

Background

Although it is a simple and effective index that has been widely used to evaluate the academic output of scientists, the h-index suffers from drawbacks. One critical disadvantage is that only the h-squared citations can be inferred from the h-index, which completely ignores excess and h-tail citations, leading to unfair and inaccurate evaluations in many cases.

Methodology/Principal Findings

To solve this problem, I propose the h’-index, in which h-squared, excess, and h-tail citations are all considered. Based on the citation data of the 100 most prolific economists, the h’-index shows, compared with the h-index, a better correlation with the total citation count and with citations per publication, which, although relatively reliable and widely used, do not carry information about the citation distribution. In contrast, the h’-index can discriminate between the shapes of citation distributions, thus leading to more accurate evaluation.

Conclusions/Significance

The h’-index improves on the h-index, as well as on the total citation count and citations per publication, by being able to discriminate between shapes of citation distributions, making it a better single-number index for evaluating scientific output in a fairer and more reasonable way.
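A minimal sketch of the decomposition the abstract refers to, assuming per-paper citation counts: total citations split into the h² block counted by the h-index, the excess citations of the h-core papers (e²), and the citations of papers outside the h-core (the h-tail, t²). The exact way the h’-index combines these components is defined in the source paper and not reproduced here.

```python
def citation_decomposition(citations):
    """Split total citations into the h^2 block, excess h-core citations (e^2),
    and h-tail citations (t^2) that the plain h-index ignores."""
    cites = sorted(citations, reverse=True)
    h = sum(1 for rank, c in enumerate(cites, start=1) if c >= rank)
    h_squared = h * h
    e_squared = sum(cites[:h]) - h_squared   # excess citations of h-core papers
    t_squared = sum(cites[h:])               # citations of papers outside the h-core
    return h, h_squared, e_squared, t_squared

# Two profiles with the same general size but very different citation shapes.
print(citation_decomposition([50, 40, 30, 4, 3, 2]))  # top-heavy: large e^2
print(citation_decomposition([5, 4, 3, 3, 3, 3]))     # flat: t^2 larger than e^2
```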

9.
In order to improve the h-index in terms of its accuracy and sensitivity to the form of the citation distribution, we propose a new bibliometric index. The basic idea is to define, for any author with a given number of citations, an “ideal” citation distribution which represents a benchmark in terms of the number of papers and the number of citations per publication, and to obtain an index whose value increases as the real citation distribution approaches its ideal form. The method is very general because the ideal distribution can be defined differently according to the main objective of the index. In this paper we propose to define it by a “squared-form” distribution: this is consistent with many popular bibliometric indices, which reach their maximum value when the distribution is basically a “square”. This approach generally rewards the more regular and reliable researchers, and it seems to be especially suitable for dealing with common situations such as applications for academic positions. To show the advantages of the proposed index, some mathematical properties are proved and an application to real data is presented.

10.
Authorship credit for multi-authored scientific publications is routinely allocated either by issuing full publication credit repeatedly to all coauthors, or by dividing one credit equally among all coauthors. The ensuing inflationary and equalizing biases distort derived bibliometric measures of merit by systematically benefiting secondary authors at the expense of primary authors. Here I show how harmonic counting, which allocates credit according to authorship rank and the number of coauthors, provides simultaneous source-level correction for both biases as well as accommodating further decoding of byline information. I also demonstrate large and erratic effects of counting bias on the original h-index, and show how the harmonic version of the h-index provides unbiased bibliometric ranking of scientific merit while retaining the original's essential simplicity, transparency and intended fairness. Harmonic decoding of byline information resolves the conundrum of authorship credit allocation by providing a simple recipe for source-level correction of inflationary and equalizing bias. Harmonic counting could also offer unrivalled accuracy in automated assessments of scientific productivity, impact and achievement.
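A minimal sketch of harmonic counting as described above, under the common convention that the i-th of N coauthors receives credit proportional to 1/i, normalized so that the credits for one paper sum to 1; treat this specific weighting as an assumption rather than the paper's full byline-decoding scheme.

```python
from fractions import Fraction

def harmonic_credits(n_authors):
    """Allocate one publication credit across n_authors by authorship rank:
    author i receives (1/i) / (1 + 1/2 + ... + 1/n_authors)."""
    harmonic_sum = sum(Fraction(1, k) for k in range(1, n_authors + 1))
    return [Fraction(1, i) / harmonic_sum for i in range(1, n_authors + 1)]

# Example: a 4-author paper gives credits 12/25, 6/25, 4/25, 3/25 (0.48, 0.24, 0.16, 0.12),
# which sum to 1, unlike full counting (4 credits issued) or equal fractional counting (0.25 each).
print([float(c) for c in harmonic_credits(4)])
```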

11.

Objective

To compare expert assessment with bibliometric indicators as tools to assess the quality and importance of scientific research papers.

Methods and Materials

Shortly after their publication in 2005, the quality and importance of a cohort of nearly 700 Wellcome Trust (WT) associated research papers were assessed by expert reviewers; each paper was reviewed by two WT expert reviewers. After 3 years, we compared this initial assessment with other measures of paper impact.

Results

Shortly after publication, 62 (9%) of the 687 research papers were determined to describe at least a ‘major addition to knowledge’; 6 were thought to be ‘landmark’ papers. At an aggregate level, after 3 years, there was a strong positive association between expert assessment and impact as measured by the number of citations and F1000 rating. However, there were some important exceptions, indicating that bibliometric measures may not be sufficient in isolation as measures of research quality and importance, especially not for assessing single papers or small groups of research publications.

Conclusion

When attempting to assess the quality and importance of research papers, we found that sole reliance on bibliometric indicators would have led us to miss papers containing important results as judged by expert review. In particular, some papers that were highly rated by experts were not highly cited during the first three years after publication. Tools that link expert peer review of research paper quality and importance to more quantitative indicators, such as citation analysis, would be valuable additions to the field of research assessment and evaluation.

12.

Objectives

This study aimed to compare the impact of Gross Domestic Product (GDP) per capita, spending on Research and Development (R&D), the number of universities, and the number of indexed scientific journals on the total number of research documents (papers), citations per document, and Hirsch index (H-index) in various science and social science subjects among Asian countries.

Materials and Methods

In this study, 40 Asian countries were included. Information on the Asian countries, their GDP per capita, spending on R&D, total number of universities, and indexed scientific journals was collected. We recorded the bibliometric indicators, including the total number of research documents, citations per document, and H-index in various science and social science subjects, during the period 1996–2011. The main sources of information were the World Bank, SCImago/Scopus, and Web of Science (Thomson Reuters).

Results

The mean per capita GDP for all the Asian countries is US$14,448.31±2,854.40, the mean yearly per capita spending on R&D is US$0.64±0.16, the mean number of universities is 72.37±18.32, and the mean number of ISI-indexed journals per country is 17.97±7.35. The mean number of research documents published in various science and social science subjects among all the Asian countries during the period 1996–2011 is 158,086.92±69,204.09, with 8.67±0.48 citations per document and an H-index of 122.8±19.21. Spending on R&D, the number of universities, and the number of indexed journals have a positive correlation with the number of published documents, citations per document, and H-index in various science and social science subjects. However, there was no association between per capita GDP and research outcomes.

Conclusion

Asian countries that spend more on R&D and have larger numbers of universities and indexed scientific journals produce more research output, including the total number of research publications, citations per document, and H-index, in various science and social science subjects.
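As a sketch of the kind of country-level correlation analysis summarized above, here is a minimal example using Pearson correlation on hypothetical toy data (the study's actual statistical procedure and values may differ; SciPy is assumed to be available).

```python
from scipy.stats import pearsonr

# Toy country-level data (hypothetical values, for illustration only).
rd_spending    = [0.2, 0.5, 0.8, 1.1, 2.0, 3.3]                      # R&D spending per capita (US$)
documents      = [12_000, 40_000, 55_000, 90_000, 160_000, 320_000]  # research documents 1996-2011
gdp_per_capita = [3_000, 12_000, 8_000, 45_000, 30_000, 25_000]      # GDP per capita (US$)

r_rd, p_rd = pearsonr(rd_spending, documents)
r_gdp, p_gdp = pearsonr(gdp_per_capita, documents)
print(f"R&D spending vs documents: r={r_rd:.2f}, p={p_rd:.3f}")
print(f"GDP per capita vs documents: r={r_gdp:.2f}, p={p_gdp:.3f}")
```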

13.
Indicator species (IS) are used to monitor environmental changes, assess the efficacy of management, and provide warning signals for impending ecological shifts. Though widely adopted in recent years by ecologists, conservation biologists, and environmental practitioners, the use of IS has been criticized for several reasons, notably the lack of justification behind the choice of any given indicator. In this review, we assess how ecologists have selected, used, and evaluated the performance of indicator species. We reviewed all articles published in Ecological Indicators (EI) between January 2001 and December 2014, focusing on the number of indicators used (one or more); common taxa employed; terminology, application, and rationale behind selection criteria; and performance assessment methods. Over the last 14 years, 1,914 scientific papers were published in EI, describing studies conducted in 53 countries on six continents; of these, 817 (43%) used biological organisms as indicators. Terms used to describe organisms in IS research included “ecological index”, “environmental index”, “indicator species”, “bioindicator”, and “biomonitor”, but these and other terms often were not clearly defined. Twenty percent of IS publications used only a single species as an indicator; the remainder used groups of species as indicators. Nearly 50% of the taxa used as indicators were animals, 70% of which were invertebrates. The most common applications of IS were to monitor ecosystem or environmental health and integrity (42%), assess habitat restoration (18%), and assess the effects of pollution and contamination (18%). Indicators were chosen most frequently based on previously cited research (40%), local abundance (5%), ecological significance and/or conservation status (13%), or a combination of two or more of these reasons (25%). Surprisingly, 17% of the reviewed papers cited no clear justification for their choice of indicator. The vast majority (99%) of publications used statistical methods to assess the performance of the selected indicators. This review not only improves our understanding of the current uses and applications of IS, but will also inform practitioners about how to better select and evaluate ecological indicators in future IS research.

14.
15.

Background

Citation analysis has become an important tool for research performance assessment in the medical sciences. However, different areas of medical research may have considerably different citation practices, even within the same medical field. Because of this, it is unclear to what extent citation-based bibliometric indicators allow for valid comparisons between research units active in different areas of medical research.

Methodology

A visualization methodology is introduced that reveals differences in citation practices between medical research areas. The methodology extracts terms from the titles and abstracts of a large collection of publications and uses these terms to visualize the structure of a medical field and to indicate how research areas within this field differ from each other in their average citation impact.

Results

Visualizations are provided for 32 medical fields, defined on the basis of journal subject categories in the Web of Science database. The analysis focuses on three fields: Cardiac & cardiovascular systems, Clinical neurology, and Surgery. In each of these fields, there turn out to be large differences in citation practices between research areas. Low-impact research areas tend to focus on clinical intervention research, while high-impact research areas are often more oriented toward basic and diagnostic research.

Conclusions

Popular bibliometric indicators, such as the h-index and the impact factor, do not correct for differences in citation practices between medical fields. These indicators therefore cannot be used to make accurate between-field comparisons. More sophisticated bibliometric indicators do correct for field differences but still fail to take into account within-field heterogeneity in citation practices. As a consequence, the citation impact of clinical intervention research may be substantially underestimated in comparison with basic and diagnostic research.
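A minimal sketch of the within-field normalization issue described above, assuming each paper is tagged with its research area: dividing a paper's citations by the mean citation rate of its own area removes the between-area differences that raw counts and field-level indicators leave in place. The areas and numbers below are hypothetical.

```python
from collections import defaultdict

def area_normalized_impact(papers):
    """papers: list of (research_area, citation_count) pairs.
    Returns each paper's citations divided by the mean citations of its own area."""
    totals, counts = defaultdict(float), defaultdict(int)
    for area, cites in papers:
        totals[area] += cites
        counts[area] += 1
    area_mean = {area: totals[area] / counts[area] for area in totals}
    return [(area, cites, cites / area_mean[area]) for area, cites in papers]

# Hypothetical example: a clinical-intervention paper with 10 citations sits above its own
# area's average, even though a basic-research paper with 30 raw citations looks "better".
papers = [("clinical intervention", 10), ("clinical intervention", 4),
          ("basic research", 30), ("basic research", 50)]
for area, cites, norm in area_normalized_impact(papers):
    print(f"{area:22s} {cites:3d} citations  normalized impact {norm:.2f}")
```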

16.
Desert steppe is an important component of China's grasslands and plays an extremely important role in safeguarding food security and ecological security. To understand the state of research in this field in China, bibliometric methods were applied to 3,651 papers published in China and abroad over 36 years, analyzing publication output, publication sources, citation frequency, principal authors, research institutions, and research hotspots. The results show that: (1) the number of papers on desert steppe has grown substantially since 2000, with more than 160 papers published per year on average, but citations per paper, publications in high-quality academic journals, and innovative research outcomes remain limited, and overall publication quality still needs improvement; (2) innovative evaluation methods and theoretical research have attracted sustained attention, and the use of meta-analysis to assess progress in a given research area has gained recognition among scholars; (3) Han Guodong, Song Naiping, Wei Zhijun, and Zhou Guangsheng are among the main contributors to desert steppe research, while institutions such as Inner Mongolia Agricultural University and Ningxia University act as important nodes in inter-institutional collaboration and form the core of the desert steppe research network; however, collaboration among institutions publishing in Chinese is more dispersed than among those publishing in English, and collaboration with foreign institutions is relatively scarce; (4) research hotspots focus mainly on the effects of human activity and climate change on desert steppe vegetation-soil systems, desert steppe vegetation productivity and its drivers, and the use of pollen analysis to study vegetation and environmental change. Future desert steppe research should strengthen collaborative innovation among domestic and foreign institutions and researchers, apply multidisciplinary knowledge and multiple technical approaches at multiple scales and dimensions, and deepen the analysis of how interacting human activities and climatic conditions shape the evolution of desert steppe plant community structure, the coupling between aboveground and belowground ecological processes, and the underlying mechanisms linking different trophic levels. Deeper mining of basic desert steppe data should promote the production of high-impact, high-quality results, balancing the quantity and quality of research output and basic research with practical application, in the hope of providing a reference and new directions for desert steppe research and a scientific basis for grassland development and decision-making in China.

17.
The ability to assess how solidly one is participating in one's research arena is a metric of interest to many people in academia. Such assessment is not easily defined, and differences exist over which metric is the most accurate. In reality, no single production metric exists that is easy to determine and acceptable to the entire scientific community. Here we propose the SP-index to quantify the scientific production of researchers, defined as the product of the annual citation number and the accumulated impact factors of the journals in which the papers appeared, divided by the annual number of published papers. This article discusses such a productivity measure and lends support to the development of unified citation metrics for use by all participating in science research or teaching.
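A minimal sketch of the SP-index as described above, for one year of a researcher's output: the product of that year's citation count and the summed impact factors of the publishing journals, divided by the number of papers published that year (the variable names and values are illustrative).

```python
def sp_index(annual_citations, journal_impact_factors):
    """SP-index for one year: (citations received that year * sum of impact factors of the
    journals in which the year's papers appeared) / number of papers published that year."""
    n_papers = len(journal_impact_factors)
    if n_papers == 0:
        return 0.0
    return annual_citations * sum(journal_impact_factors) / n_papers

# Example: 120 citations received in the year, 4 papers in journals with the listed impact factors.
print(sp_index(annual_citations=120, journal_impact_factors=[3.2, 5.1, 2.4, 9.8]))
```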

18.
The paper considers how the scientific journal Moscow University Biological Sciences Bulletin (MUBSB) has evolved during the last 7 years. It is the English edition of the Russian peer-reviewed scientific journal of the School of Biology of Lomonosov Moscow State University, MSU Vestnik (Herald), Series 16, Biology. MUBSB has been published by Allerton Press, a member of the Nauka/Interperiodica International Academic Publishing Company, since 2007. The rapid progress of MUBSB in recent years is apparently due to the journal having been distributed since 2007 by the internationally renowned Springer publishing consortium, which places electronic versions of all articles on its website and has, to all appearances, led to a manifold increase in the number of journal subscribers. As a result, the number of downloads of MUBSB papers from the publisher's website also rose by an order of magnitude from 2007 to 2013. The growing popularity of the journal has led to its inclusion in a number of international databases, and this, in turn, has increased its attractiveness for a large number of authors, including Russian researchers outside Moscow State University as well as scientists from research institutes and universities in other countries. The main features of the spectrum of papers published in MUBSB are briefly considered.

19.
Nipah virus outbreaks have occurred frequently worldwide in recent years. This study used bibliometric and science-mapping methods to analyze research hotspots in the field of the emerging zoonotic Nipah virus from 1999 to 2017, in order to understand the current state and trends of international Nipah virus research and to provide intelligence for the prevention and control of emerging and severe infectious diseases and for biosafety in China. Literature was retrieved using "Nipah" as the topic term; as of December 10, 2018, 973 papers had been retrieved, and the number of papers shows an overall year-by-year increase. The United States started early in Nipah virus research and ranks first in both the number of publications and their impact. Research institutions in Malaysia, Australia, and other countries also occupy important positions in the field, and collaboration among countries is close. China ranks 7th in the number of publications but relatively high, 3rd, in citations per paper. The results indicate that countries around the world have continued to deepen their research and analysis of Nipah virus in recent years. China started late in this field but has now made breakthrough progress; it needs to maintain in-depth exploration and research, strictly prevent and control outbreaks caused by Nipah virus, safeguard public health security, and strengthen the national biosafety defense line.

20.
[Background] Actinomycetes are an extremely important class of microorganisms with abundant metabolites that are widely applied in medicine, biotechnology, agriculture, the enzyme industry, and other fields. [Objective] To objectively analyze research progress on actinomycete metabolites, provide useful intelligence for practitioners in this field, and promote its high-quality development. [Methods] The number of publications, publishing countries, institutions, journals, publishers, authors, cited articles, and research directions on actinomycete metabolites in the Web of Science (WOS) and China National Knowledge Infrastructure (CNKI) databases were statistically analyzed; the H-index was used for a comprehensive evaluation of influence, and research hotspots and development trends were visualized with CiteSpace and VOSviewer. [Results] The WOS results show that the most influential country worldwide in actinomycete metabolite research is the United States, the most influential institution is the University of California, the most influential journal is the US journal Applied and Environmental Microbiology, the most influential publisher is Elsevier, and the most influential author is Professor Mervyn J. Bibb of the microbiology department of the John Innes Centre in the UK. The main research direction in the global actinomycete metabolite field is microbiology, and the main research hotspot is biosynthesis. ...
