Similar Literature
20 similar records found (search time: 31 ms)
1.
We propose a new method to assess the merit of any set of scientific papers in a given field based on the citations they receive. Given a field and a citation impact indicator, such as the mean citation or the -index, the merit of a given set of articles is identified with the probability that a randomly drawn set of articles from a given pool of articles in that field has a lower citation impact according to the indicator in question. The method allows for comparisons between sets of articles of different sizes and fields. Using a dataset acquired from Thomson Scientific that contains the articles published in the periodical literature in the period 1998–2007, we show that the novel approach yields rankings of research units different from those obtained by a direct application of the mean citation or the -index.
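The merit probability described in item 1 can be estimated by straightforward Monte Carlo sampling. A minimal sketch, assuming the mean citation as the indicator and illustrative names throughout; the abstract does not specify the authors' exact estimator:

```python
import random

def merit(unit_citations, field_pool,
          indicator=lambda cs: sum(cs) / len(cs), trials=10_000):
    """Estimate the probability that a randomly drawn, same-sized set of
    papers from the field pool has a LOWER citation impact than the unit."""
    n = len(unit_citations)
    unit_score = indicator(unit_citations)
    lower = sum(
        indicator(random.sample(field_pool, n)) < unit_score
        for _ in range(trials)
    )
    return lower / trials

# Toy example: a 3-paper research unit compared against a small field pool.
pool = [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
p = merit([5, 8, 13], pool, trials=5_000)
```

Because the score is a probability, units of different sizes (and, with per-field pools, different fields) become directly comparable.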

2.

Background

Citation analysis has become an important tool for research performance assessment in the medical sciences. However, different areas of medical research may have considerably different citation practices, even within the same medical field. Because of this, it is unclear to what extent citation-based bibliometric indicators allow for valid comparisons between research units active in different areas of medical research.

Methodology

A visualization methodology is introduced that reveals differences in citation practices between medical research areas. The methodology extracts terms from the titles and abstracts of a large collection of publications and uses these terms to visualize the structure of a medical field and to indicate how research areas within this field differ from each other in their average citation impact.

Results

Visualizations are provided for 32 medical fields, defined based on journal subject categories in the Web of Science database. The analysis focuses on three fields: Cardiac & cardiovascular systems, Clinical neurology, and Surgery. In each of these fields, there turn out to be large differences in citation practices between research areas. Low-impact research areas tend to focus on clinical intervention research, while high-impact research areas are often more oriented on basic and diagnostic research.

Conclusions

Popular bibliometric indicators, such as the h-index and the impact factor, do not correct for differences in citation practices between medical fields. These indicators therefore cannot be used to make accurate between-field comparisons. More sophisticated bibliometric indicators do correct for field differences but still fail to take into account within-field heterogeneity in citation practices. As a consequence, the citation impact of clinical intervention research may be substantially underestimated in comparison with basic and diagnostic research.

3.
Do citations accumulate too slowly in the social sciences to be used to assess the quality of recent articles? I investigate whether this is the case using citation data for all articles in economics and political science published in 2006 and indexed in the Web of Science. I find that citations in the first two years after publication explain more than half of the variation in cumulative citations received over a longer period. Journal impact factors improve the correlation between the predicted and actual future ranks of journal articles when using citation data from 2006 alone, but the effect declines sharply thereafter. Finally, more than half of the papers in the top 20% in 2012 were already in the top 20% in the year of publication (2006).

4.
Citations measure the importance of a publication and may serve as a proxy for its popularity and the quality of its contents. Here we study the distributions of citations to publications from individual academic institutions for a single year. The average number of citations varies widely between institutions across the world, but the probability distributions of citations for individual institutions can be rescaled to a common form by scaling the citations by the average number of citations for that institution. This feature appears to be universal for a broad selection of institutions, irrespective of the average number of citations per article. A similar analysis for citations to publications in a particular journal in a single year yields similar results. We find high absolute inequality for both these sets, with Gini coefficients around 0.66 and 0.58 for institutions and journals, respectively. We also find that the top 25% of articles hold about 75% of the total citations for institutions, and the top 29% hold about 71% for journals.
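The rescaling and inequality measures mentioned in item 4 can be reproduced in a few lines. A sketch under stated assumptions: the Gini function below uses the standard sorted-rank formula, which is not necessarily the exact estimator the study used, and the data are illustrative:

```python
def gini(xs):
    """Gini coefficient via the sorted-rank formula (0 = perfect equality)."""
    xs = sorted(xs)
    n = len(xs)
    total = sum(xs)
    if total == 0:
        return 0.0
    # G = 2 * sum_i(i * x_i) / (n * sum(x)) - (n + 1) / n, with i = 1..n
    weighted = sum(i * x for i, x in enumerate(xs, start=1))
    return 2 * weighted / (n * total) - (n + 1) / n

def rescale(citations):
    """Scale each paper's citations by the institutional mean, so that
    distributions from different institutions become comparable."""
    mean = sum(citations) / len(citations)
    return [c / mean for c in citations]

cites = [0, 1, 1, 2, 4, 8, 20]
print(round(gini(cites), 3))  # → 0.611
```

After `rescale`, every institution's distribution has mean 1, which is what allows the collapse onto a common curve described in the abstract.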

5.
Many fields face an increasing prevalence of multi-authorship, and this poses challenges in assessing citation metrics. Here, we explore multiple citation indicators that address total impact (number of citations, Hirsch H index [H]), co-authorship adjustment (Schreiber Hm index [Hm]), and author order (total citations to papers as single; single or first; or single, first, or last author). We demonstrate the correlation patterns between these indicators across 84,116 scientists (those among the top 30,000 for impact in a single year [2013] in at least one of these indicators) and separately across 12 scientific fields. Correlation patterns vary across these 12 fields. In physics, total citations are highly negatively correlated with indicators of co-authorship adjustment and of author order, while in other sciences the negative correlation is seen only for total citation impact and citations to papers as single author. We propose a composite score that sums standardized values of these six log-transformed indicators. Of the 1,000 top-ranked scientists with the composite score, only 322 are in the top 1,000 based on total citations. Many Nobel laureates and other extremely influential scientists rank among the top-1,000 with the composite indicator, but would rank much lower based on total citations. Conversely, many of the top 1,000 authors on total citations have had no single/first/last-authored cited paper. More Nobel laureates of 2011–2015 are among the top authors when authors are ranked by the composite score than by total citations, H index, or Hm index; 40/47 of these laureates are among the top 30,000 by at least one of the six indicators. We also explore the sensitivity of indicators to self-citation and alphabetic ordering of authors in papers across different scientific fields. 
Multiple indicators and their composite may give a more comprehensive picture of impact, although no citation indicator, single or composite, can be expected to select all the best scientists.
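The composite described in item 5, a sum of standardized values of log-transformed indicators, can be sketched as follows. The indicator names and data here are illustrative; the published score uses six specific indicators computed over a large reference population:

```python
import math

def composite_scores(table):
    """table: {scientist: {indicator: value}}. For each indicator,
    log-transform (log(1 + x)), standardize across scientists (z-score),
    then sum the z-scores per scientist."""
    names = list(table)
    indicators = list(next(iter(table.values())))
    logged = {s: {k: math.log1p(table[s][k]) for k in indicators}
              for s in names}
    scores = {s: 0.0 for s in names}
    for k in indicators:
        vals = [logged[s][k] for s in names]
        mean = sum(vals) / len(vals)
        sd = (sum((v - mean) ** 2 for v in vals) / len(vals)) ** 0.5 or 1.0
        for s in names:
            scores[s] += (logged[s][k] - mean) / sd
    return scores

# Illustrative records for three hypothetical scientists.
data = {
    "A": {"citations": 12000, "h": 40, "hm": 25},
    "B": {"citations": 9000, "h": 45, "hm": 30},
    "C": {"citations": 3000, "h": 20, "hm": 18},
}
ranked = sorted(data, key=lambda s: composite_scores(data)[s], reverse=True)
```

The log transform tames the heavy tails of citation counts before standardization, so no single indicator dominates the sum.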

6.
Cancer research involves the use of millions of nonhuman animals and billions of dollars in public funds each year, but cures for the disease remain elusive. This article suggests ways to reduce the use of animals and save money by identifying articles that garnered few citations over the 9 years after they were published. I obtained the citations received by 786 articles in 9 general cancer journals published in 1990 and the number of animals used (where possible) in the 220 animal-based research articles. By calculating the ratio of animal number to citation number, I identified the most effective (those with many citations and few animals used) and the least effective (those with many animals and few citations) articles. Using these ratios, I compared the effectiveness of their experiments/articles for the 9 journals, author affiliations and nationalities, and funding sources. This article recommends ways in which experiments with little chance of being influential can be avoided, thus freeing resources for more worthwhile assaults on cancer.

7.
Bibliometric indicators increasingly affect the careers, funding, and reputation of individuals, their institutions, and journals themselves. In contrast to author self-citations, little is known about the kinetics of journal self-citations. Here we hypothesized that they may show a generalizable pattern within particular research fields or across multiple fields. We thus analyzed self-cites to 60 journals from three research fields (multidisciplinary sciences, parasitology, and information science). We also hypothesized that the kinetics of journal self-citations and of citations received from other journals of the same publisher may differ from foreign citations. We analyzed the journals published by the American Association for the Advancement of Science, Nature Publishing Group, and Editura Academiei Române. We found that although the kinetics of journal self-cites is generally faster than that of foreign cites, it shows some field-specific characteristics. In information science journals in particular, the initial increase in the share of journal self-citations during post-publication year 0 was completely absent. Self-promoting journal self-citations of top-tier journals have rather indirect effects and negligible direct effects on bibliometric indicators, affecting just the immediacy index and marginally increasing the impact factor, as long as the affected journals are well established in their fields. In contrast, other forms of journal self-citations and citation stacking may severely affect the impact factor or other citation-based indices.
We identified a network of three Romanian physics journals, Proceedings of the Romanian Academy, Series A, Romanian Journal of Physics, and Romanian Reports in Physics, which displayed low to moderate ratios of journal self-citations but recently multiplied their impact factors; they were mutually responsible for 55.9%, 64.7%, and 63.3% of the citations to the three journals, respectively, within the impact factor calculation window. They received almost no network self-cites prior to the impact factor calculation window, and their network self-cites decreased sharply after it. Journal self-citations and citation stacking require increased attention and elimination from citation indices.

8.

Background

Although it is a simple and effective index that has been widely used to evaluate the academic output of scientists, the h-index suffers from drawbacks. One critical disadvantage is that only the h-squared citations can be inferred from the h-index, which completely ignores excess and h-tail citations, leading to unfair and inaccurate evaluations in many cases.

Methodology/Principal Findings

To solve this problem, I propose the h’-index, in which h-squared, excess, and h-tail citations are all considered. Based on the citation data of the 100 most prolific economists, the h’-index correlates better than the h-index with total-citation number and citations per publication, two indices that, although relatively reliable and widely used, carry no information about the citation distribution. In contrast, the h’-index can discriminate the shapes of citation distributions, leading to more accurate evaluation.

Conclusions/Significance

The h’-index improves on the h-index, as well as on total-citation number and citations per publication, by discriminating the shapes of citation distributions, making it a fairer and more reasonable single-number index for evaluating scientific output.
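The decomposition that the h’-index builds on can be made concrete. The abstract does not give the exact h’ formula, so this sketch only computes the three ingredients it names: the h-core square (h²), excess citations above the square, and h-tail citations:

```python
def h_decomposition(citations):
    """Split a citation record into h, excess citations (citations in the
    h-core above the h x h square), and h-tail citations (citations to
    papers outside the h-core). Their sum h*h + excess + tail equals the
    total citation count."""
    cs = sorted(citations, reverse=True)
    h = sum(1 for i, c in enumerate(cs, start=1) if c >= i)
    excess = sum(c - h for c in cs[:h])
    tail = sum(cs[h:])
    return h, excess, tail

# Example record: h = 4, since the 4th-ranked paper has >= 4 citations.
h, e, t = h_decomposition([10, 8, 5, 4, 3, 1, 0])
```

Two records with the same h can have very different excess and tail values, which is exactly the information the plain h-index discards.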

9.
In order to improve the h-index in terms of its accuracy and sensitivity to the form of the citation distribution, we propose the new bibliometric index . The basic idea is to define, for any author with a given number of citations, an “ideal” citation distribution which represents a benchmark in terms of number of papers and number of citations per publication, and to obtain an index which increases its value when the real citation distribution approaches its ideal form. The method is very general because the ideal distribution can be defined differently according to the main objective of the index. In this paper we propose to define it by a “squared-form” distribution: this is consistent with many popular bibliometric indices, which reach their maximum value when the distribution is basically a “square”. This approach generally rewards the more regular and reliable researchers, and it seems to be especially suitable for dealing with common situations such as applications for academic positions. To show the advantages of the -index some mathematical properties are proved and an application to real data is proposed.

10.
Citation between papers can be treated as a causal relationship. In addition, some citation networks have a number of similarities to the causal networks in network cosmology, e.g., similar in- and out-degree distributions. Hence, it is possible to model citation networks using network cosmology. Causal network models built on homogeneous spacetimes have some restrictions when describing certain phenomena in citation networks, e.g., that hot papers receive more citations than other simultaneously published papers. We propose an inhomogeneous causal network model of the citation network, whose connection mechanism expresses some features of citation well. The node growth trend and degree distributions of the generated networks also fit those of some citation networks well.

11.

Background

Systematic reviews of the literature occupy the highest position in currently proposed hierarchies of evidence. The aims of this study were to assess whether citation classics exist among published systematic reviews and meta-analyses (SRM), examine the characteristics of the most frequently cited SRM articles, and evaluate the contribution of different world regions.

Methods

The 100 most cited SRM were identified in October 2012 using the Science Citation Index database of the Institute for Scientific Information. Data were extracted by one author. Spearman’s correlation was used to assess the association between years since publication, numbers of authors, article length, journal impact factor, and average citations per year.

Results

Among the 100 citation classics, published between 1977 and 2008, the most cited article received 7308 citations and the least cited 675. The average citations per year ranged from 27.8 to 401.6. First authors from the USA produced the highest number of citation classics (n=46), followed by the UK (n=28) and Canada (n=15). The 100 articles were published in 42 journals, led by the Journal of the American Medical Association (n=18), followed by the British Medical Journal (n=14) and The Lancet (n=13). Average citations per year correlated positively with the number of authors (Spearman’s rho=0.320, p=0.001) and journal impact factor (rho=0.240, p=0.016), and negatively with years since publication (rho=-0.636, p=0.0001). The most cited papers identified seminal contributions and originators of landmark methodological aspects of SRM and reflect major advances in the management of, and predisposing factors for, chronic diseases.

Conclusions

Since the late 1970s, the USA, UK, and Canada have taken the lead in producing citation classics. No first author from a low- or middle-income country (LMIC) led any of the 100 most cited SRM.

12.
The Protein Data Bank (PDB) is the worldwide repository of 3D structures of proteins, nucleic acids, and complex assemblies. The PDB’s large corpus of data (>100,000 structures) and related citations provide a well-organized and extensive test set for developing and understanding data citation and access metrics. In this paper, we present a systematic investigation of how authors cite the PDB as a data repository. We describe a novel metric based on information cascades, constructed by exploring the citation network, to measure influence between competing works, and apply it to analyze different practices for citing the PDB. Based on this new metric, we found that the original RCSB PDB publication from the year 2000 continues to attract the most citations even though many follow-up updates were published; none of these follow-up publications by members of the wwPDB organization can compete with the original in citations and influence. Meanwhile, authors increasingly use URLs of the PDB in the text instead of citing PDB papers, disrupting the growth of literature citations. A comparison of data usage statistics and paper citations shows that PDB Web access is highly correlated with URL mentions in the text. The results reveal trends in how authors cite a biomedical data repository and may provide useful insight into how to measure the impact of a data repository.

13.
A number of new metrics based on social media platforms—grouped under the term “altmetrics”—have recently been introduced as potential indicators of research impact. Despite their current popularity, there is a lack of information regarding the determinants of these metrics. Using publication and citation data from 1.3 million papers published in 2012 and covered in Thomson Reuters’ Web of Science, as well as social media counts from Altmetric.com, this paper analyses the main patterns of five social media metrics as a function of document characteristics (i.e., discipline, document type, title length, number of pages and references) and collaborative practices, and compares them to patterns known for citations. Results show that the presence of papers on social media is low, with 21.5% of papers receiving at least one tweet, 4.7% being shared on Facebook, 1.9% mentioned on blogs, 0.8% found on Google+ and 0.7% discussed in mainstream media. By contrast, 66.8% of papers have received at least one citation. Our findings show that both citations and social media metrics increase with the extent of collaboration and the length of the reference list. On the other hand, while editorials and news items are seldom cited, these document types are the most popular on Twitter. Similarly, while longer papers typically attract more citations, an opposite trend is seen on social media platforms. Finally, contrary to what is observed for citations, papers in the social sciences and humanities are the most often found on social media platforms. On the whole, these findings suggest that the factors driving social media and citations are different. Therefore, social media metrics cannot actually be seen as alternatives to citations; at most, they may function as complements to other types of indicators.

14.
Quantifying and comparing the scientific output of researchers has become critical for governments, funding agencies and universities. Comparison by reputation and direct assessment of contributions to the field is no longer possible, as the number of scientists increases and traditional definitions of scientific fields become blurred. The h-index is often used for comparing scientists, but has several well-documented shortcomings. In this paper, we introduce a new index for measuring and comparing the publication records of scientists: the pagerank-index (symbolised as π). The index uses a version of the pagerank algorithm and the citation networks of papers in its computation, and is fundamentally different from the existing variants of the h-index because it considers not only the number of citations but also the actual impact of each citation. We adopt two approaches to demonstrate the utility of the new index. Firstly, we use a simulation model of a community of authors, whereby we create various ‘groups’ of authors which differ from each other in inherent publication habits, to show that the pagerank-index is fairer than the existing indices in three distinct scenarios: (i) when authors try to ‘massage’ their index by publishing papers in low-quality outlets primarily to self-cite other papers, (ii) when authors collaborate in large groups in order to obtain more authorships, and (iii) when authors spend most of their time producing genuine but low-quality publications that would massage their index. Secondly, we undertake two real-world case studies: (i) the evolving author community of quantum game theory, as defined by Google Scholar, and (ii) a snapshot of the high energy physics (HEP) theory research community in arXiv. In both case studies, we find that the list of top authors varies significantly when the h-index and the pagerank-index are used for comparison.
We show that in both cases, authors who have collaborated in large groups and/or published less impactful papers tend to be comparatively favoured by the h-index, whereas the pagerank-index highlights authors who have made a relatively small number of definitive contributions, written papers that served to highlight links between diverse disciplines, or typically worked in smaller groups. Thus, we argue that the pagerank-index is an inherently fairer and more nuanced metric for quantifying the publication records of scientists than existing measures.
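The core of such an index, propagating importance through the citation network, can be sketched with a plain power-iteration PageRank. This is only the paper-level step; the published π-index additionally maps paper scores onto authors, and all names and data below are illustrative:

```python
def pagerank(edges, nodes, damping=0.85, iters=100):
    """edges: list of (citing, cited) pairs. Importance flows from a
    citing paper to the papers it cites; highly ranked citers pass on
    more weight than obscure ones."""
    out = {n: 0 for n in nodes}
    for src, _ in edges:
        out[src] += 1
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        # Dangling papers (no references) redistribute their mass uniformly.
        dangling = sum(rank[n] for n in nodes if out[n] == 0)
        for n in nodes:
            new[n] += damping * dangling / len(nodes)
        for src, dst in edges:
            new[dst] += damping * rank[src] / out[src]
        rank = new
    return rank

papers = ["p1", "p2", "p3", "p4"]
cites = [("p2", "p1"), ("p3", "p1"), ("p4", "p3"), ("p4", "p2")]
r = pagerank(cites, papers)
```

Here `p1` ends up top-ranked: it is cited both directly and through chains, which is the "actual impact of each citation" idea the abstract describes.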

15.
Research activity related to different aspects of prevention, prediction, diagnosis and treatment of brain metastases has increased during recent years. One of the major databases (Scopus) contains 942 scientific articles that were published during the 5-year time period 2006–2010. Of these, 195 (21%) reported on single patient cases and 12 (1%) were reports of 2 cases. Little is known about their influence on advancement of the field or scientific merits. Do brain metastases case reports attract attention and provide stimuli for further research or do they go largely unrecognized? Different measures of impact, visibility and quality of published research are available, each with its own pros and cons. For the present evaluation, article citation rate was chosen. The median number of citations overall and stratified by year of publication was 0, except for the year 2006 when it was 2. As compared to other articles, case reports remained more often without citation (p<0.05 except for 2006 data). All case reports with 10 or more citations (n = 6) reported on newly introduced anticancer drugs, which commonly are prescribed to treat extracranial metastases, and the responses observed in single patients with brain metastases. Average annual numbers of citations were also calculated. The articles with most citations per year were the same six case reports mentioned above (the only ones that obtained more than 2.0 citations per year). Citations appeared to gradually increase during the first two years after publication but remained on a generally low or modest level. It cannot be excluded that case reports without citation provide interesting information to some clinicians or researchers. Apparently, case reports describing unexpected therapeutic success gain more attention, at least in terms of citation, than others.

16.
The Internet has dramatically expanded citizens’ access to and ability to engage with political information. On many websites, any user can contribute and edit “crowd-sourced” information about important political figures. One of the most prominent examples of crowd-sourced information on the Internet is Wikipedia, a free and open encyclopedia created and edited entirely by users, and one of the world’s most accessed websites. While previous studies of crowd-sourced information platforms have found them to be accurate, few have considered biases in what kinds of information are included. We report the results of four randomized field experiments that sought to explore what biases exist in the political articles of this collaborative website. By randomly assigning factually true but either positive or negative and cited or uncited information to the Wikipedia pages of U.S. senators, we uncover substantial evidence of an editorial bias toward positivity on Wikipedia: Negative facts are 36% more likely to be removed by Wikipedia editors than positive facts within 12 hours and 29% more likely within 3 days. Although citations substantially increase an edit’s survival time, the editorial bias toward positivity is not eliminated by inclusion of a citation. We replicate this study on the Wikipedia pages of deceased as well as recently retired but living senators and find no evidence of an editorial bias in either. Our results demonstrate that crowd-sourced information is subject to an editorial bias that favors the politically active.

17.
International collaboration is becoming increasingly important for the advancement of science. To gain a more precise understanding of how factors such as international collaboration influence publication success, we divide publication success into two categories: journal placement and citation performance. Analyzing all papers published between 1996 and 2012 in eight disciplines, we find that those with more countries in their affiliations performed better in both categories. Furthermore, specific countries vary in their effects both individually and in combination. Finally, we look at the relationship between national output (in papers published) and input (in citations received) over the 17 years, expanding upon prior depictions by also plotting an expected proportion of citations based on journal placement. Discrepancies between this expectation and the realized proportion of citations illuminate trends in performance, such as the decline of the Global North in response to rapidly developing countries, especially China. Yet most countries show little to no discrepancy, meaning that, in most cases, citation proportion can be predicted by journal placement alone. This reveals an extreme asymmetry between the opinions of a few reviewers and the degree to which paper acceptance and citation rates influence career advancement.

18.
For several decades, a leading paradigm of how to quantitatively assess scientific research has been the analysis of the aggregated citation information in a set of scientific publications. Although the representation of this information as a citation network was already conceived in the 1960s, it took the systematic indexing of scientific literature to allow for impact metrics that actually made use of this network as a whole, improving on the then prevailing metrics that were almost exclusively based on the number of direct citations. However, besides focusing on the assignment of credit, the paper citation network can also be studied in terms of the proliferation of scientific ideas. Here we introduce a simple measure based on the shortest paths in the paper’s in-component or, simply speaking, on the shape and size of the wake of a paper within the citation network. Applied to a citation network containing Physical Review publications from more than a century, our approach is able to detect seminal articles which have introduced concepts of obvious importance to the further development of physics. We observe a large fraction of papers co-authored by Nobel Prize laureates in physics among the top-ranked publications.
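The "wake" of a paper in item 18 is its in-component: every paper that cites it directly or through a chain of citations. A minimal breadth-first sketch, with an illustrative toy graph (the published measure additionally weighs this set by shortest-path lengths):

```python
from collections import deque

def in_component(paper, cited_by):
    """Return all papers that cite `paper` directly or indirectly.
    cited_by: {paper: [papers that cite it]}."""
    seen, queue = {paper}, deque([paper])
    while queue:
        cur = queue.popleft()
        for citer in cited_by.get(cur, []):
            if citer not in seen:
                seen.add(citer)
                queue.append(citer)
    seen.discard(paper)  # the wake excludes the paper itself
    return seen

# Toy graph: a and b cite "seminal"; c cites a.
graph = {"seminal": ["a", "b"], "a": ["c"], "b": [], "c": []}
wake = in_component("seminal", graph)
```

A seminal paper's wake keeps growing through indirect chains long after its direct citations plateau, which is what distinguishes this measure from a plain citation count.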

19.
20.
The number of citations that papers receive has become significant in measuring researchers’ scientific productivity, and such measurements are important when one seeks career opportunities and research funding. Skewed citation practices can thus have profound effects on academic careers. We investigated (i) how frequently authors misinterpret original information and (ii) how frequently authors inappropriately cite reviews instead of the articles upon which the reviews are based. To this end, we carried out a survey of ecology journals indexed in the Web of Science and assessed the appropriateness of citations of review papers. Reviews were cited significantly more often than regular articles. In addition, 22% of citations were inaccurate, and another 15% unfairly credited the review authors with other scientists’ ideas. These practices should be stopped, mainly through more open discussion among mentors, researchers and students.
