Similar Documents
20 similar documents found
1.
Do citations accumulate too slowly in the social sciences to be used to assess the quality of recent articles? I investigate whether this is the case using citation data for all articles in economics and political science published in 2006 and indexed in the Web of Science. I find that citations in the first two years after publication explain more than half of the variation in cumulative citations received over a longer period. Journal impact factors improve the correlation between the predicted and actual future ranks of journal articles when using citation data from 2006 alone, but the effect declines sharply thereafter. Finally, more than half of the papers in the top 20% in 2012 were already in the top 20% in the year of publication (2006).

2.
In the era of social media there are now many different ways that a scientist can build their public profile; the publication of high-quality scientific papers being just one. While social media is a valuable tool for outreach and the sharing of ideas, there is a danger that this form of communication is gaining too high a value and that we are losing sight of key metrics of scientific value, such as citation indices. To help quantify this, I propose the ‘Kardashian Index’, a measure of discrepancy between a scientist’s social media profile and publication record based on the direct comparison of numbers of citations and Twitter followers.
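The comparison can be made concrete. A minimal sketch in Python, assuming the fitted relationship reported in the original article, in which a scientist with C citations is expected to have roughly F(C) = 43.3·C^0.32 Twitter followers (treat the coefficients as illustrative here):

```python
def expected_followers(citations: float) -> float:
    # Expected Twitter followers given total citations, using the
    # power-law fit reported in the Kardashian Index paper
    # (coefficients assumed from that article; illustrative only).
    return 43.3 * citations ** 0.32

def kardashian_index(followers: int, citations: int) -> float:
    # K = actual followers divided by followers expected from citations.
    return followers / expected_followers(citations)
```

A K-index well above 1 flags a social media following that outstrips the citation record; a value well below 1 indicates the reverse.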

3.
This paper deals with the application of scientometric parameters in the evaluation of scientists, either as individuals or in small formal groups. The parameters are divided into two groups: parameters of scientific productivity and citation parameters. Scientific productivity was further subdivided into three types of parameters: (i) total productivity, (ii) partial productivity, and (iii) productivity in scientific fields and subfields. The following citation parameters were considered: (i) impact factors of journals, (ii) impact factors of scientific fields and subfields, (iii) citations of individual papers, (iv) citations of individual authors, (v) expected citation rates and relative citation rates, and (vi) self-citations, independent citations and negative citations. Particular attention was paid to the time-dependence of the scientometric parameters. Where available, numeric values of the world parameters were given and compared with data on the scientific output of Croatian scientists.

4.
International collaboration is becoming increasingly important for the advancement of science. To gain a more precise understanding of how factors such as international collaboration influence publication success, we divide publication success into two categories: journal placement and citation performance. Analyzing all papers published between 1996 and 2012 in eight disciplines, we find that those with more countries in their affiliations performed better in both categories. Furthermore, specific countries vary in their effects both individually and in combination. Finally, we look at the relationship between national output (in papers published) and input (in citations received) over the 17 years, expanding upon prior depictions by also plotting an expected proportion of citations based on journal placement. Discrepancies between this expectation and the realized proportion of citations illuminate trends in performance, such as the decline of the Global North in response to rapidly developing countries, especially China. Yet most countries show little to no discrepancy, meaning that, in most cases, citation proportion can be predicted by journal placement alone. This reveals an extreme asymmetry between the opinions of a few reviewers and the degree to which paper acceptance and citation rates influence career advancement.

5.
Nobel Prizes are commonly seen to be among the most prestigious achievements of our times. Based on mining several million citations, we quantitatively analyze the processes driving paradigm shifts in science. We find that groundbreaking discoveries of Nobel Prize Laureates and other famous scientists are not only acknowledged by many citations of their landmark papers. Surprisingly, they also boost the citation rates of their previous publications. Given that innovations must outcompete the rich-gets-richer effect for scientific citations, it turns out that they can make their way only through citation cascades. A quantitative analysis reveals how and why they happen. Science appears to behave like a self-organized critical system, in which citation cascades of all sizes occur, from continuous scientific progress all the way up to scientific revolutions, which change the way we see our world. Measuring the "boosting effect" of landmark papers, our analysis reveals how new ideas and new players can make their way and finally triumph in a world dominated by established paradigms. The underlying "boost factor" is also useful to discover scientific breakthroughs and talents much earlier than through classical citation analysis, which by now has become a widespread method to measure scientific excellence, influencing scientific careers and the distribution of research funds. Our findings reveal patterns of collective social behavior, which are also interesting from an attention economics perspective. Understanding the origin of scientific authority may therefore ultimately help to explain how social influence comes about and why the value of goods depends so strongly on the attention they attract.

6.
Publication and citation decisions in ecology are likely influenced by many factors, potentially including journal impact factors, direction and magnitude of reported effects, and year of publication. Dissemination bias exists when publication or citation of a study depends on any of these factors. We defined several dissemination biases and determined their prevalence across many sub-disciplines in ecology, then determined whether or not data quality also affected these biases. We identified dissemination biases in ecology by conducting a meta-analysis of citation trends for 3867 studies included in 52 meta-analyses. We correlated effect size, year of publication, impact factor and citation rate within each meta-analysis. In addition, we explored how data quality as defined in meta-analyses (sample size or variance) influenced each form of bias. We also explored how the direction of the predicted or observed effect, and the research field, influenced any biases. Year of publication did not influence citation rates. The first papers published in an area reported the strongest effects, and high impact factor journals published the most extreme effects. Effect size was more important than data quality for many publication and citation trends. Dissemination biases appear common in ecology, and although their magnitude was generally small, many were associated with theory tenacity, evidenced as a tendency to cite the papers that most strongly support our ideas. The consequences of this behavior are amplified by the fact that papers reporting strong effects were often of lower data quality than papers reporting much weaker effects. Furthermore, high impact factor journals published the strongest effects, generally in the absence of any correlation with data quality. Increasing awareness of the prevalence of theory tenacity, confirmation bias, and inattention to data quality among ecologists is a first step towards reducing the impact of these biases on research in our field.

7.
The large amount of information contained in bibliographic databases has recently boosted the use of citations, and other indicators based on citation numbers, as tools for the quantitative assessment of scientific research. Citation counts are often interpreted as proxies for the scientific influence of papers, journals, scholars, and institutions. However, a rigorous and scientifically grounded methodology for the correct use of citation counts is still missing. In particular, cross-disciplinary comparisons in terms of raw citation counts systematically favor scientific disciplines with higher citation and publication rates. Here we perform an exhaustive study of the citation patterns of millions of papers, and derive a simple transformation of citation counts able to suppress the disproportionate citation counts among scientific domains. We find that the transformation is well described by a power-law function, and that the parameter values of the transformation are typical features of each scientific discipline. Universal properties of citation patterns therefore descend from the fact that citation distributions for papers in a specific field are all part of the same family of univariate distributions.
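The rescaling idea can be illustrated with a toy example. This sketch only shows the general approach of dividing raw counts by a field-level baseline; the paper's actual transformation is a fitted power law whose parameters are discipline-specific:

```python
from collections import defaultdict
from statistics import mean

def field_normalized(records):
    """Illustrative cross-field normalization: divide each paper's raw
    citation count by the mean count of its discipline. (A simple
    stand-in for the paper's fitted power-law transformation, meant
    only to show how field-level citation-rate differences can be
    suppressed.)"""
    by_field = defaultdict(list)
    for field, cites in records:
        by_field[field].append(cites)
    means = {f: mean(cs) for f, cs in by_field.items()}
    return [(field, cites / means[field]) for field, cites in records]

# Biology papers gather ~10x the raw citations of math papers here,
# yet after normalization the top paper in each field scores the same.
papers = [("math", 5), ("math", 15), ("biology", 50), ("biology", 150)]
scores = field_normalized(papers)
```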

8.

Objectives

This study aimed to compare the impact of Gross Domestic Product (GDP) per capita, spending on Research and Development (R&D), number of universities, and Indexed Scientific Journals on total number of research documents (papers), citations per document and Hirsch index (H-index) in various science and social science subjects among Asian countries.

Materials and Methods

In this study, 40 Asian countries were included. Information regarding the Asian countries, their GDP per capita, spending on R&D, total number of universities and indexed scientific journals was collected. We recorded bibliometric indicators, including total number of research documents, citations per document and H-index in various science and social science subjects during the period 1996–2011. The main sources of information were the World Bank, SCImago/Scopus and Web of Science (Thomson Reuters).

Results

The mean per capita GDP for all the Asian countries is 14448.31±2854.40 US$, yearly per capita spending on R&D 0.64±0.16 US$, number of universities 72.37±18.32, and the mean number of ISI-indexed journals per country is 17.97±7.35. The mean number of research documents published in various science and social science subjects among all the Asian countries during the period 1996–2011 is 158086.92±69204.09; citations per document 8.67±0.48; and H-index 122.8±19.21. Spending on R&D, number of universities and indexed journals have a positive correlation with number of published documents, citations per document and H-index in various science and social science subjects. However, there was no association between per capita GDP and research outcomes.

Conclusion

Asian countries that spend more on R&D and have larger numbers of universities and indexed scientific journals produced more research output, including total number of research publications, citations per document and H-index, in various science and social science subjects.

9.
10.

Background

Although it is a simple and effective index that has been widely used to evaluate the academic output of scientists, the h-index suffers from drawbacks. One critical disadvantage is that only h-squared citations can be inferred from the h-index, which completely ignores excess and h-tail citations, leading to unfair and inaccurate evaluations in many cases.

Methodology/Principal Findings

To solve this problem, I propose the h’-index, in which h-squared, excess and h-tail citations are all considered. Based on the citation data of the 100 most prolific economists, the h’-index correlates better than the h-index with total citation number and citations per publication, indices which, although relatively reliable and widely used, do not carry information about the citation distribution. In contrast, the h’-index can discriminate the shapes of citation distributions, leading to more accurate evaluation.

Conclusions/Significance

The h’-index improves on the h-index, as well as on total citation number and citations per publication, by discriminating the shapes of citation distributions, making it a better single-number index for evaluating scientific output in a fairer and more reasonable way.
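The three citation components the abstract names can be computed directly from a citation record. A sketch follows; the h-index definition is standard, while the exact way the h'-index combines the three parts follows the paper and is not reproduced here:

```python
def h_index(citations):
    # Largest h such that at least h papers have >= h citations each.
    cs = sorted(citations, reverse=True)
    return sum(1 for i, c in enumerate(cs, start=1) if c >= i)

def citation_partition(citations):
    """Split a citation record into the three components the abstract
    refers to: h-squared citations (the h x h core), excess citations
    (citations of core papers beyond h each), and h-tail citations
    (all citations of papers outside the core). The three parts always
    sum to the total citation count."""
    cs = sorted(citations, reverse=True)
    h = h_index(citations)
    core, tail = cs[:h], cs[h:]
    excess = sum(c - h for c in core)
    return h * h, excess, sum(tail)
```

For the record [10, 8, 5, 4, 3, 2, 2, 1], h is 4, giving h-squared = 16, excess = 11 and h-tail = 8; the h-index alone sees only the 16 of the 35 total citations.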

11.
This study analyzes funding acknowledgments in scientific papers to investigate relationships between research sponsorship and publication impacts. We identify acknowledgments to research sponsors in nanotechnology papers published in the Web of Science during a one-year sample period. We examine the citations accrued by these papers and the journal impact factors of their publication titles. The results show that publications from grant-sponsored research exhibit higher impacts, in terms of both journal ranking and citation counts, than research that is not grant sponsored. We discuss the method and models used, and the insights provided by this approach as well as its limitations.

12.
A number of new metrics based on social media platforms—grouped under the term “altmetrics”—have recently been introduced as potential indicators of research impact. Despite their current popularity, there is a lack of information regarding the determinants of these metrics. Using publication and citation data from 1.3 million papers published in 2012 and covered in Thomson Reuters’ Web of Science, as well as social media counts from Altmetric.com, this paper analyses the main patterns of five social media metrics as a function of document characteristics (i.e., discipline, document type, title length, number of pages and references) and collaborative practices, and compares them to patterns known for citations. Results show that the presence of papers on social media is low, with 21.5% of papers receiving at least one tweet, 4.7% being shared on Facebook, 1.9% mentioned on blogs, 0.8% found on Google+ and 0.7% discussed in mainstream media. By contrast, 66.8% of papers have received at least one citation. Our findings show that both citations and social media metrics increase with the extent of collaboration and the length of the reference list. On the other hand, while editorials and news items are seldom cited, these are the document types that are most popular on Twitter. Similarly, while longer papers typically attract more citations, an opposite trend is seen on social media platforms. Finally, contrary to what is observed for citations, papers in the social sciences and humanities are the most often found on social media platforms. On the whole, these findings suggest that the factors driving social media and citations are different. Therefore, social media metrics cannot actually be seen as alternatives to citations; at most, they may function as complements to other types of indicators.

13.

Background

Systematic reviews of the literature occupy the highest position in currently proposed hierarchies of evidence. The aims of this study were to assess whether citation classics exist among published systematic reviews and meta-analyses (SRM), examine the characteristics of the most frequently cited SRM articles, and evaluate the contribution of different world regions.

Methods

The 100 most cited SRM were identified in October 2012 using the Science Citation Index database of the Institute for Scientific Information. Data were extracted by one author. Spearman’s correlation was used to assess the association between years since publication, numbers of authors, article length, journal impact factor, and average citations per year.

Results

Among the 100 citation classics, published between 1977 and 2008, the most cited article received 7308 citations and the least cited received 675 citations. The average citations per year ranged from 27.8 to 401.6. First authors from the USA produced the highest number of citation classics (n=46), followed by the UK (n=28) and Canada (n=15). The 100 articles were published in 42 journals, led by the Journal of the American Medical Association (n=18), followed by the British Medical Journal (n=14) and The Lancet (n=13). There were statistically significant positive correlations between average citations per year and both number of authors (Spearman’s rho=0.320, p=0.001) and journal impact factor (rho=0.240, p=0.016). There was a statistically significant negative correlation between average citations per year and years since publication (rho=-0.636, p=0.0001). The most cited papers identified seminal contributions and originators of landmark methodological aspects of SRM, and reflect major advances in the management of, and predisposing factors for, chronic diseases.

Conclusions

Since the late 1970s, the USA, UK, and Canada have taken the lead in the production of citation classic papers. No first author from a low- or middle-income country (LMIC) led any of the 100 most-cited SRM.

14.
In genetic epidemiology, genome-wide association studies (GWAS) are used to rapidly scan a large set of genetic variants and thus to identify associations with a particular trait or disease. The GWAS philosophy is different from that of conventional candidate-gene-based approaches, which directly test the effects of genetic variants of potentially contributory genes in an association study. One controversial question is whether GWAS provide relevant scientific outcomes in comparison with candidate-gene studies. We therefore performed a bibliometric study using two citation metrics to assess whether GWAS have contributed greater gains in knowledge discovery than candidate-gene approaches. We selected GWAS published between 2005 and 2009 and matched them with candidate-gene studies on the same topic published in the same period of time. We observed that the GWAS papers received, on average, 30±55 more citations than the candidate-gene papers one year after their publication date, and 39±58 more citations two years after their publication date. The GWAS papers were, on average, 2.8±2.4 and 2.9±2.4 times more cited than expected, one and two years after their publication date, whereas the candidate-gene papers were 1.5±1.2 and 1.5±1.4 times more cited than expected. While evaluating the contribution to scientific research through citation metrics may be challenged, it cannot be denied that GWAS are great hypothesis generators and a powerful complement to candidate-gene studies.

15.
We tested the underlying assumption that citation counts are reliable predictors of future success by analyzing complete citation data on the careers of scientists. Our results show that (i) among all citation indicators, the annual citation count at the time of prediction is the best predictor of future citations; (ii) future citations of a scientist's already-published papers can be predicted accurately over a one-year horizon; but (iii) future citations of future work are hardly predictable.

16.
Research activity related to different aspects of prevention, prediction, diagnosis and treatment of brain metastases has increased during recent years. One of the major databases (Scopus) contains 942 scientific articles that were published during the 5-year time period 2006–2010. Of these, 195 (21%) reported on single patient cases and 12 (1%) were reports of 2 cases. Little is known about their influence on advancement of the field or their scientific merits. Do brain metastases case reports attract attention and provide stimuli for further research, or do they go largely unrecognized? Different measures of impact, visibility and quality of published research are available, each with its own pros and cons. For the present evaluation, article citation rate was chosen. The median number of citations, overall and stratified by year of publication, was 0, except for the year 2006 when it was 2. Compared to other articles, case reports more often remained uncited (p<0.05, except for 2006 data). All case reports with 10 or more citations (n = 6) reported on newly introduced anticancer drugs, which are commonly prescribed to treat extracranial metastases, and the responses observed in single patients with brain metastases. Average annual numbers of citations were also calculated. The articles with the most citations per year were the same six case reports mentioned above (the only ones that obtained more than 2.0 citations per year). Citations appeared to gradually increase during the first two years after publication but remained on a generally low or modest level. It cannot be excluded that case reports without citation provide interesting information to some clinicians or researchers. Apparently, case reports describing unexpected therapeutic success gain more attention, at least in terms of citation, than others.

17.
Measuring co-authorship and networking-adjusted scientific impact
Ioannidis JP. PLoS ONE 2008;3(7):e2778
Appraisal of the scientific impact of researchers, teams and institutions with productivity and citation metrics has major repercussions. Funding and promotion of individuals and survival of teams and institutions depend on publications and citations. In this competitive environment, the number of authors per paper is increasing and apparently some co-authors don't satisfy authorship criteria. Listing of individual contributions is still sporadic and also open to manipulation. Metrics are needed to measure the networking intensity for a single scientist or group of scientists accounting for patterns of co-authorship. Here, I define I(1) for a single scientist as the number of authors who appear in at least I(1) papers of the specific scientist. For a group of scientists or institution, I(n) is defined as the number of authors who appear in at least I(n) papers that bear the affiliation of the group or institution. I(1) depends on the number of papers authored N(p). The power exponent R of the relationship between I(1) and N(p) categorizes scientists as solitary (R>2.5), nuclear (R = 2.25-2.5), networked (R = 2-2.25), extensively networked (R = 1.75-2) or collaborators (R<1.75). R may be used to adjust for co-authorship networking the citation impact of a scientist. I(n) similarly provides a simple measure of the effective networking size to adjust the citation impact of groups or institutions. Empirical data are provided for single scientists and institutions for the proposed metrics. Cautious adoption of adjustments for co-authorship and networking in scientific appraisals may offer incentives for more accountable co-authorship behaviour in published articles.
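The I(1) definition is self-referential in the same way as the h-index and can be computed analogously. A sketch under that reading (my interpretation of the abstract's wording; papers are represented as lists of co-authors, excluding the scientist themselves):

```python
from collections import Counter

def i1_index(papers):
    """I(1): the largest n such that at least n co-authors each appear
    in at least n of the scientist's papers (an h-index-style reading
    of the abstract's definition).

    papers: list of co-author name lists, one per paper."""
    counts = sorted(
        Counter(author for paper in papers for author in paper).values(),
        reverse=True,
    )
    return sum(1 for i, c in enumerate(counts, start=1) if c >= i)
```

For example, a scientist whose four papers have co-author lists [A, B], [A, B], [A, C], [B, D] has I(1) = 2: two co-authors (A and B) each appear in at least two papers.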

18.

Background

Citation data can be used to evaluate the editorial policies and procedures of scientific journals. Here we investigate citation counts for the three different publication tracks of the Proceedings of the National Academy of Sciences of the United States of America (PNAS). This analysis explores the consequences of differences in editor and referee selection, while controlling for the prestige of the journal in which the papers appear.

Methodology/Principal Findings

We find that papers authored and “Contributed” by NAS members (Track III) are on average cited less often than papers that are “Communicated” for others by NAS members (Track I) or submitted directly via the standard peer review process (Track II). However, we also find that the variance in the citation count of Contributed papers, and to a lesser extent Communicated papers, is larger than for direct submissions. Therefore, when examining the 10% most-cited papers from each track, Contributed papers receive the most citations, followed by Communicated papers, while direct submissions receive the fewest.

Conclusion/Significance

Our findings suggest that PNAS “Contributed” papers, in which NAS-member authors select their own reviewers, balance an overall lower impact with an increased probability of publishing exceptional papers. This analysis demonstrates that different editorial procedures are associated with different levels of impact, even within the same prominent journal, and raises interesting questions about the most appropriate metrics for judging an editorial policy's success.

19.
Quantifying and comparing the scientific output of researchers has become critical for governments, funding agencies and universities. Comparison by reputation and direct assessment of contributions to the field is no longer possible, as the number of scientists increases and traditional definitions of scientific fields become blurred. The h-index is often used for comparing scientists, but has several well-documented shortcomings. In this paper, we introduce a new index for measuring and comparing the publication records of scientists: the pagerank-index (symbolised as π). The index uses a version of the PageRank algorithm and the citation networks of papers in its computation, and is fundamentally different from the existing variants of the h-index because it considers not only the number of citations but also the actual impact of each citation. We take two approaches to demonstrate the utility of the new index. Firstly, we use a simulation model of a community of authors, creating various ‘groups’ of authors that differ in their inherent publication habits, to show that the pagerank-index is fairer than the existing indices in three distinct scenarios: (i) when authors try to ‘massage’ their index by publishing papers in low-quality outlets primarily to self-cite other papers; (ii) when authors collaborate in large groups in order to obtain more authorships; (iii) when authors spend most of their time producing genuine but low-quality publications that would inflate their index. Secondly, we undertake two real-world case studies: (i) the evolving author community of quantum game theory, as defined by Google Scholar; (ii) a snapshot of the high energy physics (HEP) theory research community in arXiv. In both case studies, we find that the list of top authors varies significantly when the h-index and the pagerank-index are used for comparison. We show that in both cases, authors who have collaborated in large groups and/or published less impactful papers tend to be comparatively favoured by the h-index, whereas the pagerank-index highlights authors who have made a relatively small number of definitive contributions, written papers that served to highlight the links between diverse disciplines, or typically worked in smaller groups. Thus, we argue that the pagerank-index is an inherently fairer and more nuanced metric for quantifying the publication records of scientists than existing measures.
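The core machinery behind such an index is the PageRank algorithm run over the paper citation graph. A minimal power-iteration sketch is shown below; this is a generic illustration of the algorithm family, not the paper's exact pagerank-index computation, which builds further author-level scoring on top of it:

```python
def pagerank(edges, n, damping=0.85, iters=50):
    """Basic PageRank by power iteration on a citation graph.

    edges: list of (citing, cited) paper indices; n: number of papers.
    A citation passes a share of the citing paper's rank to the cited
    paper, so a citation from a highly cited paper counts for more
    than one from an obscure paper."""
    out = [[] for _ in range(n)]
    for src, dst in edges:
        out[src].append(dst)
    rank = [1.0 / n] * n
    for _ in range(iters):
        new = [(1.0 - damping) / n] * n
        for src in range(n):
            if out[src]:
                share = damping * rank[src] / len(out[src])
                for dst in out[src]:
                    new[dst] += share
            else:  # dangling paper with no references: spread uniformly
                for dst in range(n):
                    new[dst] += damping * rank[src] / n
        rank = new
    return rank

# Paper 3 is cited by paper 2 (itself well cited) and by paper 1,
# so it ends up with the highest rank in this tiny example.
r = pagerank([(0, 2), (1, 2), (2, 3), (1, 3)], n=4)
```

In the pagerank-index setting, these paper-level scores replace raw citation counts before author-level aggregation, which is what penalises self-citation farms in low-quality outlets.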

20.
The Protein Data Bank (PDB) is the worldwide repository of 3D structures of proteins, nucleic acids and complex assemblies. The PDB’s large corpus of data (>100,000 structures) and related citations provide a well-organized and extensive test set for developing and understanding data citation and access metrics. In this paper, we present a systematic investigation of how authors cite the PDB as a data repository. We describe a novel metric based on information cascades, constructed by exploring the citation network, to measure influence between competing works, and apply it to analyze different practices for citing the PDB. Based on this new metric, we found that the original publication of the RCSB PDB in the year 2000 continues to attract the most citations, even though many follow-up updates were published. None of these follow-up publications by members of the wwPDB organization can compete with the original publication in terms of citations and influence. Meanwhile, authors increasingly choose to use URLs of the PDB in the text instead of citing PDB papers, disrupting the growth of literature citations. A comparison of data usage statistics and paper citations shows that PDB Web access is highly correlated with URL mentions in the text. The results reveal trends in how authors cite a biomedical data repository and may provide useful insight into how to measure the impact of a data repository.
