Similar Documents
20 similar documents found (search time: 109 ms)
1.

Background

The impact of scientific publications has traditionally been expressed in terms of citation counts. However, scientific activity has moved online over the past decade. To better capture scientific impact in the digital era, a variety of new impact measures have been proposed on the basis of social network analysis and usage log data. Here we investigate how these new measures relate to each other, and how accurately and completely they express scientific impact.

Methodology

We performed a principal component analysis of the rankings produced by 39 existing and proposed measures of scholarly impact that were calculated on the basis of both citation and usage log data.

Conclusions

Our results indicate that the notion of scientific impact is a multi-dimensional construct that cannot be adequately measured by any single indicator, although some measures are more suitable than others. The commonly used citation Impact Factor is not positioned at the core of this construct, but at its periphery, and should thus be used with caution.
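The PCA step described under Methodology can be sketched as follows. This is a minimal illustration using random stand-in data, not the 39 real citation- and usage-based measures from the study; the function name and toy matrix are my own.

```python
import numpy as np

def pca_on_rankings(rankings):
    """PCA via SVD on a (items x measures) matrix of rankings.

    Returns the fraction of variance explained by each principal
    component, in descending order. Sketch only: the study's 39
    measures are replaced here by a random toy matrix.
    """
    # Center each measure (column) before extracting components.
    X = rankings - rankings.mean(axis=0)
    # Squared singular values are proportional to component variances.
    _, s, _ = np.linalg.svd(X, full_matrices=False)
    var = s ** 2
    return var / var.sum()

rng = np.random.default_rng(0)
toy = rng.random((100, 5))          # 100 items ranked by 5 toy measures
explained = pca_on_rankings(toy)
```

With real data, a single dominant component across many indicators would suggest one underlying dimension; the study found instead that the variance spreads over several components.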

2.
Publications are thought to be an integrative indicator best suited to measure the multifaceted nature of scientific performance. Therefore, indicators based on the publication record (citation analysis) are the primary tool for rapid evaluation of scientific performance. Nevertheless, it has to be questioned whether the indicators really do measure what they are intended to measure because people adjust to the indicator value system by optimizing their indicator rather than their performance. Thus, no matter how sophisticated an indicator may be, it will never be proof against manipulation. A literature review identifies the most critical problems of citation analysis: database-related problems, inflated citation records, bias in citation rates and crediting of multi-author papers. We present a step-by-step protocol to address these problems. By applying this protocol, reviewers can avoid most of the pitfalls associated with the pure numbers of indicators and achieve a fast but fair evaluation of a scientist's performance. We as ecologists should accept complexity not only in our research but also in our research evaluation and should encourage scientists of other disciplines to do so as well.

3.
Many fields face an increasing prevalence of multi-authorship, and this poses challenges in assessing citation metrics. Here, we explore multiple citation indicators that address total impact (number of citations, Hirsch H index [H]), co-authorship adjustment (Schreiber Hm index [Hm]), and author order (total citations to papers as single; single or first; or single, first, or last author). We demonstrate the correlation patterns between these indicators across 84,116 scientists (those among the top 30,000 for impact in a single year [2013] in at least one of these indicators) and separately across 12 scientific fields. Correlation patterns vary across these 12 fields. In physics, total citations are highly negatively correlated with indicators of co-authorship adjustment and of author order, while in other sciences the negative correlation is seen only for total citation impact and citations to papers as single author. We propose a composite score that sums standardized values of these six log-transformed indicators. Of the 1,000 top-ranked scientists with the composite score, only 322 are in the top 1,000 based on total citations. Many Nobel laureates and other extremely influential scientists rank among the top-1,000 with the composite indicator, but would rank much lower based on total citations. Conversely, many of the top 1,000 authors on total citations have had no single/first/last-authored cited paper. More Nobel laureates of 2011–2015 are among the top authors when authors are ranked by the composite score than by total citations, H index, or Hm index; 40/47 of these laureates are among the top 30,000 by at least one of the six indicators. We also explore the sensitivity of indicators to self-citation and alphabetic ordering of authors in papers across different scientific fields. 
Multiple indicators and their composite may give a more comprehensive picture of impact, although no citation indicator, single or composite, can be expected to select all the best scientists.
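The composite described above (summing standardized values of log-transformed indicators) can be sketched like this. The indicator names and the toy records are invented placeholders, not the study's dataset; the study used six specific citation indicators.

```python
import math

def composite_score(scientists, indicators):
    """Sum of z-scored, log-transformed indicator values per scientist.

    `scientists` maps a name to a dict of raw indicator counts.
    Sketch of the approach described in the abstract; the indicator
    names below are hypothetical.
    """
    logged = {
        name: {k: math.log(1 + vals[k]) for k in indicators}
        for name, vals in scientists.items()
    }
    scores = {name: 0.0 for name in scientists}
    n = len(scientists)
    for k in indicators:
        col = [logged[name][k] for name in scientists]
        mean = sum(col) / n
        sd = (sum((x - mean) ** 2 for x in col) / n) ** 0.5 or 1.0
        for name in scientists:
            # Add this indicator's z-score to the running composite.
            scores[name] += (logged[name][k] - mean) / sd
    return scores

toy = {
    "A": {"citations": 5000, "h": 40},
    "B": {"citations": 1200, "h": 20},
    "C": {"citations": 300, "h": 10},
}
scores = composite_score(toy, ["citations", "h"])
```

Because each column is standardized, every indicator contributes on the same scale regardless of its raw magnitude, which is the point of the composite.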

4.
Citation analysis numbers are increasingly used as measures of faculty productivity and impact. However, once initial citation numbers have been established, how does one maintain one's citation record? In a time of budgetary uncertainty it is more vital than ever to incorporate citation analysis into annual review packages and other departmental reports that rely on such faculty data. The focus of this paper is to describe this next step in the process.

5.
Publication and citation decisions in ecology are likely influenced by many factors, potentially including journal impact factors, direction and magnitude of reported effects, and year of publication. Dissemination bias exists when publication or citation of a study depends on any of these factors. We defined several dissemination biases and determined their prevalence across many sub‐disciplines in ecology, then determined whether or not data quality also affected these biases. We identified dissemination biases in ecology by conducting a meta‐analysis of citation trends for 3867 studies included in 52 meta‐analyses. We correlated effect size, year of publication, impact factor and citation rate within each meta‐analysis. In addition, we explored how data quality as defined in meta‐analyses (sample size or variance) influenced each form of bias. We also explored how the direction of the predicted or observed effect, and the research field, influenced any biases. Year of publication did not influence citation rates. The first papers published in an area reported the strongest effects, and high impact factor journals published the most extreme effects. Effect size was more important than data quality for many publication and citation trends. Dissemination biases appear common in ecology, and although their magnitude was generally small, many were associated with theory tenacity, evidenced as tendencies to cite papers that most strongly support our ideas. The consequences of this behavior are amplified by the fact that papers reporting strong effects were often of lower data quality than papers reporting much weaker effects. Furthermore, high impact factor journals published the strongest effects, generally in the absence of any correlation with data quality.
Increasing awareness of the prevalence of theory tenacity, confirmation bias, and the inattention to data quality among ecologists is a first step towards reducing the impact of these biases on research in our field.

6.
Tomáš Grim, Oikos (2008), 117(4): 484-487
Publication output is the standard by which scientific productivity is evaluated. Despite a plethora of papers on the issue of publication and citation biases, no study has so far considered a possible effect of social activities on publication output. One of the most frequent social activities in the world is drinking alcohol. In Europe, most alcohol is consumed as beer and, based on well known negative effects of alcohol consumption on cognitive performance, I predicted negative correlations between beer consumption and several measures of scientific performance. Using a survey from the Czech Republic, which has the highest per capita beer consumption rate in the world, I show that increasing per capita beer consumption is associated with lower numbers of papers, total citations, and citations per paper (a surrogate measure of paper quality). In addition, I found the same predicted trends in a comparison of two separate geographic areas within the Czech Republic that are also known to differ in beer consumption rates. These correlations are consistent with the possibility that leisure time social activities might influence the quality and quantity of scientific work and may be potential sources of publication and citation biases.

7.
For several decades, a leading paradigm of how to quantitatively assess scientific research has been the analysis of the aggregated citation information in a set of scientific publications. Although the representation of this information as a citation network was already conceived in the 1960s, it needed the systematic indexing of scientific literature to allow for impact metrics that actually made use of this network as a whole, improving on the then-prevailing metrics that were almost exclusively based on the number of direct citations. However, besides focusing on the assignment of credit, the paper citation network can also be studied in terms of the proliferation of scientific ideas. Here we introduce a simple measure based on the shortest paths in a paper's in-component or, simply speaking, on the shape and size of the wake of a paper within the citation network. Applied to a citation network containing Physical Review publications from more than a century, our approach is able to detect seminal articles which have introduced concepts of obvious importance to the further development of physics. We observe a large fraction of papers co-authored by Nobel Prize laureates in physics among the top-ranked publications.
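A paper's in-component, the "wake" mentioned above, is the set of all papers that cite it directly or through chains of citations, and it can be gathered with a breadth-first search over the citation graph. The toy graph below is illustrative, not the Physical Review data, and the actual measure also uses shortest-path structure that this sketch omits.

```python
from collections import deque

def in_component(cited_by, paper):
    """All papers reaching `paper` along citation links (its 'wake').

    `cited_by` maps each paper to the papers that cite it directly.
    Breadth-first search collects the whole in-component; the study's
    measure additionally weights by shortest-path distances.
    """
    seen = set()
    queue = deque([paper])
    while queue:
        p = queue.popleft()
        for citer in cited_by.get(p, []):
            if citer not in seen:
                seen.add(citer)
                queue.append(citer)
    return seen

# Toy citation graph: B and C cite A directly; D cites B (so D is in
# A's wake at distance two).
toy = {"A": ["B", "C"], "B": ["D"]}
wake = in_component(toy, "A")
```

The BFS naturally yields papers in order of citation distance, which is what a shortest-path-based refinement of the measure would build on.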

8.
How to quantify the impact of a researcher’s or an institution’s body of work is a matter of increasing importance to scientists, funding agencies, and hiring committees. The use of bibliometric indicators, such as the h-index or the Journal Impact Factor, has become widespread despite their known limitations. We argue that most existing bibliometric indicators are inconsistent, biased, and, worst of all, susceptible to manipulation. Here, we pursue a principled approach to the development of an indicator to quantify the scientific impact of both individual researchers and research institutions, grounded in the functional form of the distribution of the asymptotic number of citations. We validate our approach using the publication records of 1,283 researchers from seven scientific and engineering disciplines and the chemistry departments at the 106 U.S. research institutions classified as “very high research activity”. Our approach has three distinct advantages. First, it accurately captures the overall scientific impact of researchers at all career stages, as measured by asymptotic citation counts. Second, unlike other measures, our indicator is resistant to manipulation and rewards publication quality over quantity. Third, our approach captures the time-evolution of the scientific impact of research institutions.

9.
This paper deals with the application of scientometric parameters in the evaluation of scientists, either as individuals or in small formal groups. The parameters are divided into two groups: parameters of scientific productivity and citation parameters. The scientific productivity was further subdivided into three types of parameters: (i) total productivity, (ii) partial productivity, and (iii) productivity in scientific fields and subfields. These citation parameters were considered: (i) impact factors of journals, (ii) impact factors of scientific fields and subfields, (iii) citations of individual papers, (iv) citations of individual authors, (v) expected citation rates and relative citation rates, and (vi) self-citations, independent citations and negative citations. Particular attention was paid to the time-dependence of the scientometric parameters. Where available, numeric values of the world parameters were given and compared with the data about the scientific output of Croatian scientists.

10.
The ability to assess how solidly one is participating in one's research arena is of interest to many in academia. Such assessment is not easily defined, and differences exist over which metric is the most accurate. In reality, no single production metric exists that is both easy to determine and acceptable to the entire scientific community. Here we propose the SP-index to quantify the scientific production of researchers: the product of the annual citation number and the accumulated impact factors of the journals in which the papers appeared, divided by the annual number of published papers. This article discusses this productivity measure and lends support to the development of unified citation metrics for use by all participating in scientific research or teaching.
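Reading the abstract's description literally, the SP-index for a given year could be computed as below. This is an interpretive sketch: the published definition may handle the terms differently, and the researcher data here is invented.

```python
def sp_index(annual_citations, journal_impact_factors, annual_papers):
    """SP-index as described in the abstract: (annual citations x
    accumulated impact factors of the journals that carried the
    papers) / annual number of published papers.

    Interpretation of the text above, not a verified formula.
    """
    if annual_papers == 0:
        return 0.0
    return annual_citations * sum(journal_impact_factors) / annual_papers

# Hypothetical researcher: 120 citations this year, 4 papers in
# journals with the listed (made-up) impact factors.
score = sp_index(120, [3.1, 5.4, 2.0, 4.5], 4)
```

Note that multiplying citations by accumulated impact factors makes the index grow with both realized impact and venue prestige, while the division by paper count penalizes sheer quantity.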

11.
The degree to which regions can produce desirable socioeconomic and environmental outcomes while consuming fewer resources and producing fewer undesirable outcomes can be viewed as a measure of productivity. Economists have frequently used Malmquist Indices to evaluate intertemporal productivity changes of economic entities, such as firms and countries. We use Malmquist Indices to evaluate the predicted environmental performance of the rapidly growing Charlotte, NC, metropolitan area under alternative future land use scenarios. These scenarios project population, urban development, and environmental impacts from the base year 2000 to the year 2030 within the region's 184 watersheds. The first scenario is based on a continuation of current growth trends and patterns (“Business as Usual” or BAU). The second scenario uses compact “smart growth” development (“Compact Centers” or CC). We use data envelopment analysis (DEA) to estimate Malmquist Indices, which, in this case, combine multiple variables into a single indicator that measures the relative impact of different development patterns on the consumption of natural resources. The results predict that the CC scenario maintains the region's current productivity, while the BAU scenario results in lower productivity. As watersheds in the study area are about the same size, weighting the results by area makes little difference. Watershed populations, however, vary greatly, and our results predict that watersheds with higher population densities also have higher Malmquist Index efficiencies. The model also predicts that low population watersheds will benefit more from the CC scenario. While the application of these analytical techniques in this case study is limited in scope, the results demonstrate that the Malmquist Index is a potentially powerful tool for interdisciplinary environmental impact analysis.

12.
Evaluative bibliometrics uses advanced techniques to assess the impact of scholarly work in the context of other scientific work and usually compares the relative scientific contributions of research groups or institutions. Using publications from the National Institute of Allergy and Infectious Diseases (NIAID) HIV/AIDS extramural clinical trials networks, we assessed the presence, performance, and impact of papers published in 2006-2008. Through this approach, we sought to expand traditional bibliometric analyses beyond citation counts to include normative comparisons across journals and fields, visualization of co-authorship across the networks, and assessment of the inclusion of publications in reviews and syntheses. Specifically, we examined the research output of the networks in terms of the a) presence of papers in the scientific journal hierarchy ranked on the basis of journal influence measures, b) performance of publications on traditional bibliometric measures, and c) impact of publications in comparisons with similar publications worldwide, adjusted for journals and fields. We also examined collaboration and interdisciplinarity across the initiative, through network analysis and modeling of co-authorship patterns. Finally, we explored the uptake of network produced publications in research reviews and syntheses. Overall, the results suggest the networks are producing highly recognized work, engaging in extensive interdisciplinary collaborations, and having an impact across several areas of HIV-related science. The strengths and limitations of the approach for evaluation and monitoring research initiatives are discussed.

13.
This article analyses the effect of degree of interdisciplinarity on the citation impact of individual publications for four different scientific fields. We operationalise interdisciplinarity as disciplinary diversity in the references of a publication, and rather than treating interdisciplinarity as a monodimensional property, we investigate the separate effect of different aspects of diversity on citation impact: i.e. variety, balance and disparity. We use a Tobit regression model to examine the effect of these properties of interdisciplinarity on citation impact, controlling for a range of variables associated with the characteristics of publications. We find that variety has a positive effect on impact, whereas balance and disparity have a negative effect. Our results further qualify the separate effect of these three aspects of diversity by pointing out that all three dimensions of interdisciplinarity display a curvilinear (inverted U-shape) relationship with citation impact. These findings can be interpreted in two different ways. On the one hand, they are consistent with the view that, while combining multiple fields has a positive effect in knowledge creation, successful research is better achieved through research efforts that draw on a relatively proximal range of fields, as distal interdisciplinary research might be too risky and more likely to fail. On the other hand, these results may be interpreted as suggesting that scientific audiences are reluctant to cite heterodox papers that mix highly disparate bodies of knowledge—thus giving less credit to publications that are too groundbreaking or challenging.
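The three aspects of diversity can be operationalised roughly as below, following common Rao-Stirling-style decompositions: variety as the number of distinct fields cited, balance as the evenness of citations across those fields, and disparity as how dissimilar the fields are. The paper's exact definitions may differ, and the reference list and 0/1 field distance here are invented.

```python
import math
from collections import Counter
from itertools import combinations

def diversity_aspects(ref_fields, distance):
    """Variety, balance and disparity of a publication's reference list.

    `ref_fields` lists the discipline of each cited reference;
    `distance(a, b)` is the dissimilarity between two disciplines.
    One common operationalisation; not necessarily the paper's.
    """
    counts = Counter(ref_fields)
    fields = sorted(counts)
    variety = len(fields)  # number of distinct disciplines cited
    props = [counts[f] / len(ref_fields) for f in fields]
    # Balance: Shannon evenness, 1.0 when citations spread evenly.
    if variety > 1:
        entropy = -sum(p * math.log(p) for p in props)
        balance = entropy / math.log(variety)
    else:
        balance = 1.0
    # Disparity: mean pairwise distance between the distinct fields.
    pairs = list(combinations(fields, 2))
    disparity = (sum(distance(a, b) for a, b in pairs) / len(pairs)
                 if pairs else 0.0)
    return variety, balance, disparity

# Hypothetical reference list and a crude same/different field distance.
refs = ["ecology", "ecology", "statistics", "economics"]
v, b, d = diversity_aspects(refs, lambda a, b: 0.0 if a == b else 1.0)
```

Separating the three components is what lets the study report opposite signs for variety versus balance and disparity.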

14.

Background

In contrast to Newton's well-known aphorism that he had been able “to see further only by standing on the shoulders of giants,” the Spanish philosopher Ortega y Gasset is credited with the hypothesis that top-level research cannot be successful without a mass of medium researchers on which the top rests, comparable to an iceberg.

Methodology/Principal Findings

The Ortega hypothesis predicts that highly-cited papers and medium-cited (or lowly-cited) papers would equally refer to papers with a medium impact. The Newton hypothesis would be supported if top-level research cites previously highly-cited work more frequently than medium-level research does. Our analysis is based on (i) all articles and proceedings papers which were published in 2003 in the life sciences, health sciences, physical sciences, and social sciences, and (ii) all articles and proceedings papers which were cited within these publications. The results show that highly-cited work in all scientific fields cites previously highly-cited papers more frequently than medium-cited work does.

Conclusions/Significance

We demonstrate that papers contributing to the scientific progress in a field lean to a larger extent on previously important contributions than papers contributing little. These findings support the Newton hypothesis and call into question the Ortega hypothesis (given our usage of citation counts as a proxy for impact).

15.
The number of citations that papers receive has become significant in measuring researchers' scientific productivity, and such measurements are important when one seeks career opportunities and research funding. Skewed citation practices can thus have profound effects on academic careers. We investigated (i) how frequently authors misinterpret original information and (ii) how frequently authors inappropriately cite reviews instead of the articles upon which the reviews are based. To this end, we carried out a survey of ecology journals indexed in the Web of Science and assessed the appropriateness of citations of review papers. Reviews were cited significantly more often than regular articles. In addition, 22% of citations were inaccurate, and another 15% unfairly gave credit to the review authors for other scientists' ideas. These practices should be stopped, mainly through more open discussion among mentors, researchers and students.

16.
Recent studies suggest the necessity of understanding the interactive effects of predation and productivity on species coexistence and prey diversity. Models predict that coexistence of prey species with different competitive abilities can be achieved if inferior resource competitors are less susceptible to predation and if productivity and/or predation pressure are at intermediate levels. Hence, predator effects on prey diversity are predicted to be highly context dependent: enhancing diversity from low to intermediate levels of productivity or predation and reducing diversity of prey at high levels of productivity or predation. While several studies have examined the interactive effects of herbivory and productivity on primary producer diversity, experimental studies of such effects in predator‐prey systems are rare. We tested these predictions using an aquatic field mesocosm experiment in which initial density of the zooplankton predator Notonecta undulata and productivity were manipulated to test their interactive effects on diversity of seven zooplankton, cladoceran species that were common in surrounding ponds. Two productivity levels were imposed via phosphorus enrichment at levels comparable to low and intermediate levels found within neighboring natural ponds. We used open systems to allow for natural dispersal and behaviorally‐mediated numerical responses by the flight‐capable predator. Effects of predators on zooplankton diversity depended on productivity level. At low productivity, prey species richness declined with increasing predator density, while at high productivity it showed a unimodal relationship. Effects of treatments were weaker when using Pielou's evenness index or the inverse Simpson index as measures of prey diversity. Our findings are generally consistent with model predictions in which predators can facilitate prey coexistence and diversity at intermediate levels of productivity and predation intensity.
Our work also shows that the functional form of the relationship between prey diversity and predation intensity can be complex and highly dependent on environmental context.

17.

Background

The analysis of co-authorship networks aims at exploring the impact of network structure on the outcome of scientific collaborations and research publications. However, little is known about which network properties are associated with authors who have a larger number of joint publications and are highly cited.

Methodology/Principal Findings

Measures of social network analysis, for example network centrality and tie strength, have been utilized extensively in current co-authorship literature to explore different behavioural patterns of co-authorship networks. Using three SNA measures (i.e., degree centrality, closeness centrality and betweenness centrality), we explore scientific collaboration networks to understand factors influencing performance (i.e., citation count) and formation (tie strength between authors) of such networks. A citation count is the number of times an article is cited by other articles. We use a co-authorship dataset of the research field of ‘steel structure’ for the years 2005 to 2009. To measure the strength of scientific collaboration between two authors, we consider the number of articles co-authored by them. In this study, we examine how the citation count of a scientific publication is influenced by different centrality measures of its co-author(s) in a co-authorship network. We further analyze the impact of the network positions of authors on the strength of their scientific collaborations. We use both correlation and regression methods for data analysis leading to statistical validation. We find that the citation count of a research article is positively correlated with the degree centrality and betweenness centrality values of its co-author(s). Also, we reveal that degree centrality and betweenness centrality values of authors in a co-authorship network are positively correlated with the strength of their scientific collaborations.

Conclusions/Significance

Authors’ network positions in co-authorship networks influence the performance (i.e., citation count) and formation (i.e., tie strength) of scientific collaborations.
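The kind of analysis described above (relating degree centrality in a co-authorship network to citation counts via correlation) can be sketched as follows. The five-author network and citation counts are invented for illustration; the study used the steel-structure dataset and also closeness and betweenness centrality.

```python
def degree_centrality(edges, nodes):
    """Fraction of the other nodes each node is connected to."""
    deg = {n: 0 for n in nodes}
    for a, b in edges:
        deg[a] += 1
        deg[b] += 1
    return {n: deg[n] / (len(nodes) - 1) for n in nodes}

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical authors, co-authorship ties, and citation counts.
authors = ["a", "b", "c", "d", "e"]
coauth = [("a", "b"), ("a", "c"), ("a", "d"), ("b", "c"), ("d", "e")]
citations = {"a": 90, "b": 60, "c": 55, "d": 40, "e": 10}
cent = degree_centrality(coauth, authors)
r = pearson([cent[x] for x in authors], [citations[x] for x in authors])
```

In this toy network the best-connected author also has the most citations, so the correlation comes out strongly positive, mirroring the direction of the study's finding.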

18.

Background

Citation analysis has become an important tool for research performance assessment in the medical sciences. However, different areas of medical research may have considerably different citation practices, even within the same medical field. Because of this, it is unclear to what extent citation-based bibliometric indicators allow for valid comparisons between research units active in different areas of medical research.

Methodology

A visualization methodology is introduced that reveals differences in citation practices between medical research areas. The methodology extracts terms from the titles and abstracts of a large collection of publications and uses these terms to visualize the structure of a medical field and to indicate how research areas within this field differ from each other in their average citation impact.

Results

Visualizations are provided for 32 medical fields, defined based on journal subject categories in the Web of Science database. The analysis focuses on three fields: Cardiac & cardiovascular systems, Clinical neurology, and Surgery. In each of these fields, there turn out to be large differences in citation practices between research areas. Low-impact research areas tend to focus on clinical intervention research, while high-impact research areas are often more oriented on basic and diagnostic research.

Conclusions

Popular bibliometric indicators, such as the h-index and the impact factor, do not correct for differences in citation practices between medical fields. These indicators therefore cannot be used to make accurate between-field comparisons. More sophisticated bibliometric indicators do correct for field differences but still fail to take into account within-field heterogeneity in citation practices. As a consequence, the citation impact of clinical intervention research may be substantially underestimated in comparison with basic and diagnostic research.

19.

Background

Publication records and citation indices often are used to evaluate academic performance. For this reason, obtaining or computing them accurately is important. This can be difficult, largely due to a lack of complete knowledge of an individual's publication list and/or lack of time available to manually obtain or construct the publication-citation record. While online publication search engines have somewhat addressed these problems, using raw search results can yield inaccurate estimates of publication-citation records and citation indices.

Methodology

In this paper, we present a new, automated method that produces estimates of an individual's publication-citation record from an individual's name and a set of domain-specific vocabulary that may occur in the individual's publication titles. Because this vocabulary can be harvested directly from a research web page or online (partial) publication list, our method delivers an easy way to obtain estimates of a publication-citation record and the relevant citation indices. Our method works by applying a series of stringent name and content filters to the raw publication search results returned by an online publication search engine. In this paper, our method is run using Google Scholar, but the underlying filters can be easily applied to any existing publication search engine. When compared against a manually constructed data set of individuals and their publication-citation records, our method provides significant improvements over raw search results. The estimated publication-citation records returned by our method have an average sensitivity of and specificity of (in contrast to raw search result specificity of less than 10%). When citation indices are computed using these records, the estimated indices are within of the true value, compared to raw search results which have overestimates of, on average, .

Conclusions

These results confirm that our method provides significantly improved estimates over raw search results, and these can either be used directly for large-scale (departmental or university) analysis or further refined manually to quickly give accurate publication-citation records.
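The name-and-content filtering idea can be sketched as a pair of predicates applied to raw search hits. The record fields, thresholds, and sample data below are invented for illustration; the paper's filters are described as considerably more stringent (e.g., handling name variants).

```python
def filter_hits(hits, author, vocabulary, min_terms=1):
    """Keep hits whose author list contains `author` and whose title
    contains at least `min_terms` domain-vocabulary words.

    Simplified stand-in for the stringent name/content filters the
    abstract describes; real filters would handle name variants,
    initials, and stemming.
    """
    vocab = {w.lower() for w in vocabulary}
    kept = []
    for hit in hits:
        # Name filter: exact (case-insensitive) author match.
        if author.lower() not in (a.lower() for a in hit["authors"]):
            continue
        # Content filter: title must share words with the vocabulary.
        words = set(hit["title"].lower().split())
        if len(words & vocab) >= min_terms:
            kept.append(hit)
    return kept

raw = [
    {"authors": ["J. Smith", "A. Doe"], "title": "Citation networks in ecology"},
    {"authors": ["J. Smith"], "title": "A cookbook of pasta recipes"},
    {"authors": ["K. Jones"], "title": "Citation metrics revisited"},
]
mine = filter_hits(raw, "J. Smith", ["citation", "bibliometrics", "networks"])
```

Only the first hit survives both filters here: the second fails the content filter and the third fails the name filter, which is how such filtering raises specificity over raw search results.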

20.
The lack of empirical support for the positive economic impact of information technology (IT) has been called the IT productivity paradox. Even though output measurement problems have often been held responsible for the paradox, we conjecture that modeling limitations in production-economics-based studies and input measurement also might have contributed to the paucity of systematic evidence regarding the impact of IT. We take the position that output measurement is slightly less problematic in manufacturing than in the service sector and that there is sound a priori rationale to expect substantial productivity gains from IT investments in manufacturing and production management. We revisit the IT productivity paradox to highlight some potential limitations of earlier research and obtain empirical support for these conjectures. We apply a theoretical framework involving explicit modeling of a strategic business unit's (SBU) input choices to a secondary data set in the manufacturing sector. A widely cited study by Loveman (1994) with the same dataset showed that the marginal contribution of IT to productivity was negative. However, our analysis reveals a significant positive impact of IT investment on SBU output. We show that Loveman's negative results can be attributed to the deflator used for the IT capital. Further, modeling issues such as a firm's choice of inputs like IT, non-IT, and labor lead to major differences in the IT productivity estimates. The question as to whether firms actually achieved economic benefits from IT investments in the past decade has been raised in the literature, and our results provide evidence of sizable productivity gains by large successful corporations in the manufacturing sector during the same time period.
