Similar articles
20 similar articles found (search time: 15 ms).
1.
This paper has two aims: (i) to introduce a novel method for measuring which part of overall citation inequality can be attributed to differences in citation practices across scientific fields, and (ii) to implement an empirical strategy for making meaningful comparisons between the number of citations received by articles in 22 broad fields. The number of citations received by any article is seen as a function of the article’s scientific influence, and the field to which it belongs. A key assumption is that articles in the same quantile of any field citation distribution have the same degree of citation impact in their respective field. Using a dataset of 4.4 million articles published in 1998–2003 with a five-year citation window, we estimate that differences in citation practices between the 22 fields account for 14% of overall citation inequality. Our empirical strategy is based on the strong similarities found in the behavior of citation distributions. We obtain three main results. Firstly, we estimate a set of average-based indicators, called exchange rates, to express the citations received by any article in a large interval in terms of the citations received in a reference situation. Secondly, using our exchange rates as normalization factors of the raw citation data reduces the effect of differences in citation practices to, approximately, 2% of overall citation inequality in the normalized citation distributions. Thirdly, we provide an empirical explanation of why the usual normalization procedure based on the fields’ mean citation rates is found to be equally successful.
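The normalization idea above can be illustrated with a minimal sketch (an illustration under stated assumptions, not the authors' procedure): each article's raw citation count is divided by a field-specific factor, here the field's mean citation rate, which the abstract itself names as the usual normalization baseline; the paper's own exchange rates are quantile-based averages rather than simple means.

```python
from collections import defaultdict

def field_normalized_citations(articles):
    """articles: list of (field, citations) pairs."""
    totals, counts = defaultdict(float), defaultdict(int)
    for field, cites in articles:
        totals[field] += cites
        counts[field] += 1
    # Stand-in "exchange rate": the mean citation rate of each field.
    rate = {f: totals[f] / counts[f] for f in totals}
    return [(f, c / rate[f] if rate[f] else 0.0) for f, c in articles]

if __name__ == "__main__":
    sample = [("Biology", 25), ("Biology", 5), ("Mathematics", 4), ("Mathematics", 2)]
    print(field_normalized_citations(sample))
```

After normalization, articles from high- and low-citation fields are expressed on a comparable scale, which is the point of the exchange-rate exercise described in the abstract.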

2.
We tested the underlying assumption that citation counts are reliable predictors of future success, analyzing complete citation data on the careers of scientists. Our results show that i) among all citation indicators, the annual citation count at the time of prediction is the best predictor of future citations, ii) future citations of a scientist's published papers can be predicted accurately (especially for a 1-year prediction), but iii) future citations of future work are hardly predictable.

3.
Many fields face an increasing prevalence of multi-authorship, and this poses challenges in assessing citation metrics. Here, we explore multiple citation indicators that address total impact (number of citations, Hirsch H index [H]), co-authorship adjustment (Schreiber Hm index [Hm]), and author order (total citations to papers as single; single or first; or single, first, or last author). We demonstrate the correlation patterns between these indicators across 84,116 scientists (those among the top 30,000 for impact in a single year [2013] in at least one of these indicators) and separately across 12 scientific fields. Correlation patterns vary across these 12 fields. In physics, total citations are highly negatively correlated with indicators of co-authorship adjustment and of author order, while in other sciences the negative correlation is seen only for total citation impact and citations to papers as single author. We propose a composite score that sums standardized values of these six log-transformed indicators. Of the 1,000 top-ranked scientists with the composite score, only 322 are in the top 1,000 based on total citations. Many Nobel laureates and other extremely influential scientists rank among the top 1,000 with the composite indicator, but would rank much lower based on total citations. Conversely, many of the top 1,000 authors on total citations have had no single/first/last-authored cited paper. More Nobel laureates of 2011–2015 are among the top authors when authors are ranked by the composite score than by total citations, H index, or Hm index; 40/47 of these laureates are among the top 30,000 by at least one of the six indicators. We also explore the sensitivity of indicators to self-citation and alphabetic ordering of authors in papers across different scientific fields. Multiple indicators and their composite may give a more comprehensive picture of impact, although no citation indicator, single or composite, can be expected to select all the best scientists.
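A minimal sketch of the composite-score construction described above, assuming it sums per-indicator z-scores of log-transformed counts; the indicator values, the log1p transform, and the random example data are illustrative assumptions, not the study's exact recipe.

```python
import numpy as np

def composite_scores(indicators):
    """indicators: array of shape (n_scientists, n_indicators) with raw counts."""
    logged = np.log1p(np.asarray(indicators, dtype=float))   # log-transform: log(1 + x)
    z = (logged - logged.mean(axis=0)) / logged.std(axis=0)  # standardize each indicator
    return z.sum(axis=1)                                     # composite = sum of z-scores

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Six hypothetical indicators (e.g. total citations, H, Hm, and the three
    # author-order counts) for five scientists.
    raw = rng.poisson(lam=[2000, 800, 600, 300, 500, 400], size=(5, 6))
    print(composite_scores(raw))
```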

4.
Quantifying and comparing the scientific output of researchers has become critical for governments, funding agencies and universities. Comparison by reputation and direct assessment of contributions to the field is no longer possible, as the number of scientists increases and traditional definitions of scientific fields become blurred. The h-index is often used for comparing scientists, but has several well-documented shortcomings. In this paper, we introduce a new index for measuring and comparing the publication records of scientists: the pagerank-index (symbolised as π). The index uses a version of the pagerank algorithm and the citation networks of papers in its computation, and is fundamentally different from the existing variants of the h-index because it considers not only the number of citations but also the actual impact of each citation. We adopt two approaches to demonstrate the utility of the new index. Firstly, we use a simulation model of a community of authors, whereby we create various ‘groups’ of authors which differ from each other in inherent publication habits, to show that the pagerank-index is fairer than the existing indices in three distinct scenarios: (i) when authors try to ‘massage’ their index by publishing papers in low-quality outlets primarily to self-cite other papers; (ii) when authors collaborate in large groups in order to obtain more authorships; and (iii) when authors spend most of their time producing genuine but low-quality publications that would massage their index. Secondly, we undertake two real-world case studies: (i) the evolving author community of quantum game theory, as defined by Google Scholar; and (ii) a snapshot of the high energy physics (HEP) theory research community in arXiv. In both case studies, we find that the list of top authors varies significantly when the h-index and the pagerank-index are used for comparison. We show that in both cases, authors who have collaborated in large groups and/or published less impactful papers tend to be comparatively favoured by the h-index, whereas the pagerank-index highlights authors who have made a relatively small number of definitive contributions, or written papers which served to highlight the link between diverse disciplines, or typically worked in smaller groups. Thus, we argue that the pagerank-index is an inherently fairer and more nuanced metric for quantifying the publication records of scientists than existing measures.
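The sketch below conveys the intuition behind a PageRank-based author score: papers are ranked by PageRank over the citation network and each paper's score is shared among its authors. It is only a loose illustration of the pagerank-index idea; the published π-index has its own normalization and credit-assignment scheme. The networkx dependency and the equal credit split are assumptions.

```python
import networkx as nx

def author_pagerank(citation_edges, authorship):
    """citation_edges: list of (citing_paper, cited_paper) pairs.
    authorship: dict mapping paper -> list of authors."""
    graph = nx.DiGraph(list(citation_edges))
    paper_rank = nx.pagerank(graph, alpha=0.85)        # paper-level PageRank scores
    scores = {}
    for paper, authors in authorship.items():
        share = paper_rank.get(paper, 0.0) / max(len(authors), 1)  # split credit equally
        for author in authors:
            scores[author] = scores.get(author, 0.0) + share
    return sorted(scores.items(), key=lambda item: -item[1])
```

Because PageRank weights a citation by the importance of the citing paper, two authors with identical citation counts can end up with very different scores, which is the behaviour the abstract attributes to the pagerank-index.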

5.
Measuring co-authorship and networking-adjusted scientific impact
Ioannidis JP. PLoS ONE 2008; 3(7): e2778
Appraisal of the scientific impact of researchers, teams and institutions with productivity and citation metrics has major repercussions. Funding and promotion of individuals and the survival of teams and institutions depend on publications and citations. In this competitive environment, the number of authors per paper is increasing and apparently some co-authors don't satisfy authorship criteria. Listing of individual contributions is still sporadic and also open to manipulation. Metrics are needed to measure the networking intensity for a single scientist or a group of scientists, accounting for patterns of co-authorship. Here, I define I(1) for a single scientist as the number of authors who appear in at least I(1) papers of the specific scientist. For a group of scientists or an institution, I(n) is defined as the number of authors who appear in at least I(n) papers that bear the affiliation of the group or institution. I(1) depends on the number of papers authored, N(p). The power exponent R of the relationship between I(1) and N(p) categorizes scientists as solitary (R>2.5), nuclear (R = 2.25-2.5), networked (R = 2-2.25), extensively networked (R = 1.75-2) or collaborators (R<1.75). R may be used to adjust the citation impact of a scientist for co-authorship networking. I(n) similarly provides a simple measure of the effective networking size to adjust the citation impact of groups or institutions. Empirical data are provided for single scientists and institutions for the proposed metrics. Cautious adoption of adjustments for co-authorship and networking in scientific appraisals may offer incentives for more accountable co-authorship behaviour in published articles.
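The I(1) definition above is self-referential in the same way as the h-index, so it can be computed with an h-style scan over co-author frequencies. A minimal sketch, assuming the scientist is excluded from their own co-author counts:

```python
from collections import Counter

def i1_index(papers, scientist):
    """papers: list of author lists; each list contains `scientist` plus co-authors."""
    coauthor_counts = Counter(
        author for authors in papers for author in authors if author != scientist
    )
    freqs = sorted(coauthor_counts.values(), reverse=True)
    i1 = 0
    while i1 < len(freqs) and freqs[i1] >= i1 + 1:  # largest k with k co-authors on >= k papers
        i1 += 1
    return i1
```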

6.
During the past 28 years, the journal "Collegium Antropologicum" has continuously served as one of the main disseminators of anthropological scientific production in Central and Eastern Europe. The journal was committed to its role as a multidisciplinary platform for presenting a wide range of research topics relevant to anthropology, from investigations within social and cultural anthropology and archaeology to those covering contemporary population genetics, human evolution and biomedical issues. Two key strategies aimed at sustaining and increasing the impact of this journal were oriented towards: (i) identification of promising local groups of researchers who were at a disadvantage in many respects (e.g. educational curricula, financial support, language barriers, etc.) when trying to publish their research internationally, and (ii) invitation and encouragement of already established international scientists to contribute to "Collegium Antropologicum". From 1980-2000, 89 articles (or 6.3% of all published papers during that period) were cited 6 or more times, contributing disproportionately to the journal's impact (nearly a third of all citations received). In an attempt to identify such papers more readily among future submissions to the journal, we analyzed the research topics and author affiliations of the 89 most frequently cited papers in comparison to all papers published. Among the most frequently cited papers, we found a greater-than-expected prevalence of Croatian researchers (especially when publishing in collaboration with international scientists) and of studies of special populations. Several papers received more than 25 citations or had an overall citation intensity greater than 2 per year. This implies that an interesting article from a local group of researchers can still resonate with an international audience even though it is published in a regional journal. The present analysis supports the current editorial strategy, which, with the help of the international consulting editorial board, continuously improves the international recognition of the journal. The results imply that balanced encouragement of promising local groups of researchers and of contributions from already established international scientists is a strategy superior to others for maintaining and increasing the impact of this regional journal.

7.
8.
The number of citations that papers receive has become significant in measuring researchers' scientific productivity, and such measurements are important when one seeks career opportunities and research funding. Skewed citation practices can thus have profound effects on academic careers. We investigated (i) how frequently authors misinterpret original information and (ii) how frequently authors inappropriately cite reviews instead of the articles upon which the reviews are based. To this end, we carried out a survey of ecology journals indexed in the Web of Science and assessed the appropriateness of citations of review papers. Reviews were cited significantly more often than regular articles. In addition, 22% of citations were inaccurate, and another 15% unfairly gave credit to the review authors for other scientists' ideas. These practices should be stopped, mainly through more open discussion among mentors, researchers and students.

9.
In this paper, we assess the bibliometric parameters of 37 Dutch professors in clinical cardiology. These are the Hirsch index (h-index) based on all papers, the h-index based on first-authored papers, the number of papers, the number of citations, and the number of citations per paper. A top 10 for each of the five parameters was compiled. In theory, the same 10 professors might appear in each of these top 10s. Alternatively, each of the 37 professors under assessment could appear one or more times. In practice, we found 22 out of these 37 professors in the five top 10s. Thus, there is no golden parameter. In addition, there is too much inhomogeneity in citation characteristics even within a relatively homogeneous group of clinical cardiologists. Therefore, citation analysis should be applied with great care in science policy. This is even more important when different fields of medicine are compared in university medical centres. It may be possible to develop better parameters in the future, but the present ones are simply not good enough. Also, we observed a quite remarkable explosion in the number of publications per author, which, paradoxical as it may sound, should probably not be interpreted as an increase in the productivity of scientists, but as the effect of an increase in the number of co-authors and the strategic effect of networks.
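A minimal sketch of the two Hirsch-index variants compared above: the h-index over all papers and over first-authored papers only. The input format (author lists plus per-paper citation counts) is an assumption for illustration.

```python
def h_index(citation_counts):
    """citation_counts: iterable of per-paper citation counts."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    while h < len(counts) and counts[h] >= h + 1:  # largest h with h papers cited >= h times
        h += 1
    return h

def first_author_h_index(papers, author):
    """papers: list of (author_list, citations) tuples for the given author."""
    return h_index(c for authors, c in papers if authors and authors[0] == author)
```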

10.
Nobel Prizes are commonly seen to be among the most prestigious achievements of our times. Based on mining several million citations, we quantitatively analyze the processes driving paradigm shifts in science. We find that groundbreaking discoveries of Nobel Prize Laureates and other famous scientists are not only acknowledged by many citations of their landmark papers. Surprisingly, they also boost the citation rates of their previous publications. Given that innovations must outcompete the rich-gets-richer effect for scientific citations, it turns out that they can make their way only through citation cascades. A quantitative analysis reveals how and why they happen. Science appears to behave like a self-organized critical system, in which citation cascades of all sizes occur, from continuous scientific progress all the way up to scientific revolutions, which change the way we see our world. Measuring the "boosting effect" of landmark papers, our analysis reveals how new ideas and new players can make their way and finally triumph in a world dominated by established paradigms. The underlying "boost factor" is also useful to discover scientific breakthroughs and talents much earlier than through classical citation analysis, which by now has become a widespread method to measure scientific excellence, influencing scientific careers and the distribution of research funds. Our findings reveal patterns of collective social behavior, which are also interesting from an attention economics perspective. Understanding the origin of scientific authority may therefore ultimately help to explain how social influence comes about and why the value of goods depends so strongly on the attention they attract.

11.
This article analyses the effect of the degree of interdisciplinarity on the citation impact of individual publications for four different scientific fields. We operationalise interdisciplinarity as disciplinary diversity in the references of a publication, and rather than treating interdisciplinarity as a monodimensional property, we investigate the separate effects of different aspects of diversity on citation impact: variety, balance and disparity. We use a Tobit regression model to examine the effect of these properties of interdisciplinarity on citation impact, controlling for a range of variables associated with the characteristics of publications. We find that variety has a positive effect on impact, whereas balance and disparity have a negative effect. Our results further qualify the separate effects of these three aspects of diversity by pointing out that all three dimensions of interdisciplinarity display a curvilinear (inverted U-shaped) relationship with citation impact. These findings can be interpreted in two different ways. On the one hand, they are consistent with the view that, while combining multiple fields has a positive effect in knowledge creation, successful research is better achieved through research efforts that draw on a relatively proximal range of fields, as distal interdisciplinary research might be too risky and more likely to fail. On the other hand, these results may be interpreted as suggesting that scientific audiences are reluctant to cite heterodox papers that mix highly disparate bodies of knowledge—thus giving less credit to publications that are too groundbreaking or challenging.
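A hedged sketch of the three diversity aspects named above, computed over the disciplines cited in a publication's reference list. The definitions follow the common Stirling-style operationalization (variety as the number of fields, balance as Shannon evenness, disparity as the mean pairwise dissimilarity between fields); the article's exact indicators and data sources may differ.

```python
import math
from collections import Counter

def diversity_aspects(reference_fields, distance):
    """reference_fields: list of field labels, one per cited reference.
    distance: dict mapping frozenset({field_a, field_b}) -> dissimilarity in [0, 1]."""
    counts = Counter(reference_fields)
    proportions = [n / len(reference_fields) for n in counts.values()]
    variety = len(counts)                                   # number of distinct fields cited
    shannon = -sum(p * math.log(p) for p in proportions)
    balance = shannon / math.log(variety) if variety > 1 else 1.0  # Shannon evenness
    fields = list(counts)
    pairs = [(a, b) for i, a in enumerate(fields) for b in fields[i + 1:]]
    disparity = (sum(distance.get(frozenset(pair), 1.0) for pair in pairs) / len(pairs)
                 if pairs else 0.0)                         # mean pairwise dissimilarity
    return variety, balance, disparity
```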

12.
The most highly cited ecologists and environmental scientists provide both a benchmark and a unique opportunity to consider the importance of research funding. Here, we use citation data and self-reported funding levels to assess the relative importance of various factors in shaping productivity and potential impact. The elite were senior Americans, well funded, with large labs. In contrast to Canadian NSERC grant holders (not in the top 1%), citations per paper did not increase with higher levels of funding within the ecological elite. We propose that this is good news for several reasons. It suggests that the publications generated by the top ecologists and environmental scientists are subject to limitations, that a higher volume of publications is always important, and that increased funding to ecologists in general can shift our discipline to wider research networks. As expected, collaboration was identified as an important factor for the elite, and hopefully, this serves as a positive incentive to funding agencies since it increases the visibility of their research.

13.
Why are some scientific disciplines, such as sociology and psychology, more fragmented into conflicting schools of thought than other fields, such as physics and biology? Furthermore, why does high fragmentation tend to coincide with limited scientific progress? We analyzed a formal model where scientists seek to identify the correct answer to a research question. Each scientist is influenced by three forces: (i) signals received from the correct answer to the question; (ii) peer influence; and (iii) noise. We observed the emergence of different macroscopic patterns of collective exploration, and studied how the three forces affect the degree to which disciplines fall apart into divergent fragments, or so-called “schools of thought”. We conducted two simulation experiments where we tested (A) whether the three forces foster or hamper progress, and (B) whether disciplinary fragmentation causally affects scientific progress and vice versa. We found that fragmentation critically limits scientific progress. Strikingly, there is no effect in the opposite causal direction. What is more, our results show that at the heart of the mechanisms driving scientific progress we find (i) social interactions, and (ii) peer disagreement. In fact, fragmentation is increased and progress limited if the simulated scientists are open to influence only by peers with very similar views, or when within-school diversity is lost. Finally, disciplines where the scientists received strong signals from the correct answer were less fragmented and experienced faster progress. We discuss the model’s implications for the design of social institutions fostering interdisciplinarity and participation in science.
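A toy sketch, not the authors' model, of how the three forces can be combined in a simulation: each agent's belief is pulled toward a signal from the correct answer and toward the mean belief of like-minded peers (a bounded-confidence rule), and is perturbed by noise. All parameter names and the update rule are illustrative assumptions.

```python
import random

def update_beliefs(beliefs, truth, signal_w=0.1, peer_w=0.3, noise_sd=0.05,
                   window=0.2, rng=None):
    """One simulation step: pull each belief toward the truth and toward similar peers."""
    rng = rng or random.Random(0)
    updated = []
    for belief in beliefs:
        peers = [b for b in beliefs if abs(b - belief) <= window]  # bounded confidence
        peer_mean = sum(peers) / len(peers)
        drift = signal_w * (truth - belief) + peer_w * (peer_mean - belief)
        updated.append(belief + drift + rng.gauss(0.0, noise_sd))
    return updated

if __name__ == "__main__":
    rng = random.Random(1)
    beliefs = [rng.uniform(0.0, 1.0) for _ in range(50)]
    for _ in range(200):
        beliefs = update_beliefs(beliefs, truth=0.5, rng=rng)
    print(min(beliefs), max(beliefs))  # a narrow spread suggests little fragmentation
```

Shrinking the confidence window or weakening the signal in this toy setup tends to leave several persistent clusters of beliefs, loosely mirroring the "schools of thought" behaviour the abstract describes.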

14.
Interdisciplinary research is increasingly recognized as the solution to today’s challenging scientific and societal problems, but the relationship between interdisciplinary research and scientific impact is still unclear. This paper studies the association between the degree of interdisciplinarity and the number of citations at the paper level. Unlike previous studies, which combine various aspects of interdisciplinarity into a single indicator, we use factor analysis to uncover distinct dimensions of interdisciplinarity corresponding to variety, balance, and disparity. We estimate Poisson models with journal fixed effects and robust standard errors to analyze the divergent relationships between these three factors and citations. We find that long-term (13-year) citations (1) increase at an increasing rate with variety, (2) decrease with balance, and (3) increase at a decreasing rate with disparity. Furthermore, interdisciplinarity also affects the process of citation accumulation: (1) although variety and disparity have positive effects on long-term citations, they have negative effects on short-term (3-year) citations, and (2) although balance has a negative effect on long-term citations, its negative effect is insignificant in the short run. These findings have important implications for interdisciplinary research and science policy.
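A sketch of the kind of specification described above, assuming a pandas DataFrame with hypothetical column names (citations, variety, balance, disparity, journal): a Poisson regression with journal fixed effects and heteroskedasticity-robust standard errors fitted via statsmodels. The study's exact covariates and functional form will differ.

```python
import statsmodels.formula.api as smf

def fit_citation_model(df):
    """df: pandas DataFrame with columns citations, variety, balance, disparity, journal."""
    model = smf.poisson(
        "citations ~ variety + I(variety ** 2) + balance"
        " + disparity + I(disparity ** 2) + C(journal)",
        data=df,
    )
    return model.fit(cov_type="HC1")  # heteroskedasticity-robust standard errors
```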

15.
Tomáš Grim. Oikos 2008; 117(4): 484–487
Publication output is the standard by which scientific productivity is evaluated. Despite a plethora of papers on the issue of publication and citation biases, no study has so far considered a possible effect of social activities on publication output. One of the most frequent social activities in the world is drinking alcohol. In Europe, most alcohol is consumed as beer and, based on the well-known negative effects of alcohol consumption on cognitive performance, I predicted negative correlations between beer consumption and several measures of scientific performance. Using a survey from the Czech Republic, which has the highest per capita beer consumption rate in the world, I show that increasing per capita beer consumption is associated with lower numbers of papers, total citations, and citations per paper (a surrogate measure of paper quality). In addition, I found the same predicted trends in a comparison of two separate geographic areas within the Czech Republic that are also known to differ in beer consumption rates. These correlations are consistent with the possibility that leisure-time social activities might influence the quality and quantity of scientific work and may be potential sources of publication and citation biases.

16.
We propose a new method to assess the merit of any set of scientific papers in a given field based on the citations they receive. Given a field and a citation impact indicator, such as the mean citation or the h-index, the merit of a given set of articles is identified with the probability that a randomly drawn set of articles from a given pool of articles in that field has a lower citation impact according to the indicator in question. The method allows for comparisons between sets of articles of different sizes and fields. Using a dataset acquired from Thomson Scientific that contains the articles published in the periodical literature in the period 1998–2007, we show that the novel approach yields rankings of research units different from those obtained by a direct application of the mean citation or the h-index.
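The merit measure described above can be approximated with a simple Monte Carlo sketch: draw many random article sets of the same size from the field's pool and report the fraction with lower impact. This illustration uses the mean citation as the indicator; the function and parameter names are assumptions, not the paper's implementation.

```python
import random

def merit_probability(evaluated_citations, field_pool, n_draws=10000, seed=0):
    """Probability that a random same-sized set from `field_pool` has a lower mean citation."""
    rng = random.Random(seed)
    k = len(evaluated_citations)
    target = sum(evaluated_citations) / k
    lower = sum(
        1
        for _ in range(n_draws)
        if sum(rng.sample(field_pool, k)) / k < target
    )
    return lower / n_draws
```

Because the result is a probability, sets of very different sizes, or from fields with very different citation cultures, can be compared on the same 0-to-1 scale, which is the comparability property the abstract emphasizes.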

17.
How to quantify the impact of a researcher’s or an institution’s body of work is a matter of increasing importance to scientists, funding agencies, and hiring committees. The use of bibliometric indicators, such as the h-index or the Journal Impact Factor, has become widespread despite their known limitations. We argue that most existing bibliometric indicators are inconsistent, biased, and, worst of all, susceptible to manipulation. Here, we pursue a principled approach to the development of an indicator to quantify the scientific impact of both individual researchers and research institutions grounded on the functional form of the distribution of the asymptotic number of citations. We validate our approach using the publication records of 1,283 researchers from seven scientific and engineering disciplines and the chemistry departments at the 106 U.S. research institutions classified as “very high research activity”. Our approach has three distinct advantages. First, it accurately captures the overall scientific impact of researchers at all career stages, as measured by asymptotic citation counts. Second, unlike other measures, our indicator is resistant to manipulation and rewards publication quality over quantity. Third, our approach captures the time-evolution of the scientific impact of research institutions.

18.
Author-level metrics are a widely used measure of scientific success. The h-index and its variants measure publication output (number of publications) and research impact (number of citations). They are often used to influence decisions, such as allocating funding or jobs. Here, we argue that the emphasis on publication output and impact hinders scientific progress in the fields of ecology and evolution because it disincentivizes two fundamental practices: generating impactful (and therefore often long-term) datasets and sharing data. We describe a new author-level metric, the data-index, which values both dataset output (number of datasets) and impact (number of data-index citations), and so promotes generating and sharing data. We discuss how it could be implemented and provide user guidelines. The data-index is designed to complement other metrics of scientific success, as scientific contributions are diverse and our value system should reflect this, both for the benefit of scientific progress and to create a value system that is more equitable, diverse, and inclusive. Future work should focus on promoting other scientific contributions, such as communicating science, informing policy, mentoring other scientists, and providing open-access code and tools.

19.
The assessment of scientific publications is an integral part of the scientific process. Here we investigate three methods of assessing the merit of a scientific paper: subjective post-publication peer review, the number of citations gained by a paper, and the impact factor of the journal in which the article was published. We investigate these methods using two datasets in which subjective post-publication assessments of scientific publications have been made by experts. We find that there are moderate, but statistically significant, correlations between assessor scores, when two assessors have rated the same paper, and between assessor score and the number of citations a paper accrues. However, we show that assessor score depends strongly on the journal in which the paper is published, and that assessors tend to over-rate papers published in journals with high impact factors. If we control for this bias, we find that the correlation between assessor scores and between assessor score and the number of citations is weak, suggesting that scientists have little ability to judge either the intrinsic merit of a paper or its likely impact. We also show that the number of citations a paper receives is an extremely error-prone measure of scientific merit. Finally, we argue that the impact factor is likely to be a poor measure of merit, since it depends on subjective assessment. We conclude that the three measures of scientific merit considered here are poor; in particular subjective assessments are an error-prone, biased, and expensive method by which to assess merit. We argue that the impact factor may be the most satisfactory of the methods we have considered, since it is a form of pre-publication review. However, we emphasise that it is likely to be a very error-prone measure of merit that is qualitative, not quantitative.

Author summary

Subjective assessments of the merit and likely impact of scientific publications are routinely made by scientists during their own research, and as part of promotion, appointment, and government committees. Using two large datasets in which scientists have made qualitative assessments of scientific merit, we show that scientists are poor at judging scientific merit and the likely impact of a paper, and that their judgment is strongly influenced by the journal in which the paper is published. We also demonstrate that the number of citations a paper accumulates is a poor measure of merit, and we argue that, although it is likely to be poor, the impact factor of the journal in which a paper is published may be the best measure of scientific merit currently available.

20.
Adaptations to overcrowding of individual plants result in density-dependent control of growth and development. There is little information on how anthropogenic stresses modify these responses. We investigated whether combinations of diclofop-methyl herbicide and tropospheric ozone alter the pattern of expected growth compensation with density changes resulting from intraspecific competition in Lolium multiflorum Lam. (Poaceae) plants. Individual plant vegetative parameters and total seed production were assessed for plants growing under various densities and different herbicide rates and ozone treatments. The stressors changed in different ways the frequency distribution of average individual plant weight resulting from increasing densities. Only the herbicide affected seedling mortality. Plants were able to compensate during grain filling, maintaining similar seed production-density relationships in all treatments. Our findings contribute to the understanding of the impact of stress factors on demographic changes in plant populations. Important ecological implications arise: (i) contrasting responses of individual plants to ozone and herbicide, alone and in combination, resulted in different biomass-density relationships; (ii) stress effects on plant populations could not be predicted from individual responses; and (iii) changes in competitive outcome caused by single or combined stress factors may alter the expected genotype frequency in a crowded population with few dominant individuals.
