Similar Literature
20 similar records retrieved
1.

Background

The analysis of co-authorship networks aims to explore the impact of network structure on the outcome of scientific collaborations and research publications. However, little is known about which network properties are associated with authors who produce more joint publications and are cited more highly.

Methodology/Principal Findings

Measures of social network analysis (SNA), for example network centrality and tie strength, have been utilized extensively in the co-authorship literature to explore different behavioural patterns of co-authorship networks. Using three SNA measures (i.e., degree centrality, closeness centrality and betweenness centrality), we explore scientific collaboration networks to understand factors influencing performance (i.e., citation count) and formation (i.e., tie strength between authors) of such networks. A citation count is the number of times an article is cited by other articles. We use a co-authorship dataset from the research field of ‘steel structure’ for the years 2005 to 2009. To measure the strength of scientific collaboration between two authors, we consider the number of articles co-authored by them. In this study, we examine how the citation count of a scientific publication is influenced by different centrality measures of its co-author(s) in a co-authorship network. We further analyze the impact of the network positions of authors on the strength of their scientific collaborations. We use both correlation and regression methods for data analysis and statistical validation (a minimal code sketch of this centrality analysis follows the abstract). We find that the citation count of a research article is positively correlated with the degree centrality and betweenness centrality values of its co-author(s). We also find that the degree centrality and betweenness centrality values of authors in a co-authorship network are positively correlated with the strength of their scientific collaborations.

Conclusions/Significance

Authors’ network positions in co-authorship networks influence the performance (i.e., citation count) and formation (i.e., tie strength) of scientific collaborations.
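The centrality-and-correlation analysis described in the Methodology above can be sketched with standard tools. The toy co-authorship graph and per-author citation counts below are purely illustrative (they are not the paper's ‘steel structure’ dataset), and the networkx/scipy calls stand in for whichever SNA software the authors used.

```python
# Illustrative sketch only: degree, closeness and betweenness centrality on a toy
# co-authorship graph, correlated with hypothetical per-author citation counts.
import networkx as nx
from scipy.stats import spearmanr

# Edge weight = number of co-authored articles (tie strength).
G = nx.Graph()
G.add_weighted_edges_from([
    ("A", "B", 3), ("A", "C", 1), ("B", "C", 2),
    ("C", "D", 1), ("D", "E", 4), ("B", "E", 1),
])

centralities = {
    "degree": nx.degree_centrality(G),
    "closeness": nx.closeness_centrality(G),
    "betweenness": nx.betweenness_centrality(G),
}

# Hypothetical citation counts of each author's articles (illustrative only).
citations = {"A": 12, "B": 30, "C": 18, "D": 5, "E": 22}
authors = sorted(G.nodes())

for name, cent in centralities.items():
    rho, p = spearmanr([cent[a] for a in authors], [citations[a] for a in authors])
    print(f"{name:12s} vs citation count: rho={rho:.2f} (p={p:.2f})")
```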

2.

Background

The impact of scientific publications has traditionally been expressed in terms of citation counts. However, scientific activity has moved online over the past decade. To better capture scientific impact in the digital era, a variety of new impact measures have been proposed, based on social network analysis and usage log data. Here we investigate how these new measures relate to each other, and how accurately and completely they express scientific impact.

Methodology

We performed a principal component analysis of the rankings produced by 39 existing and proposed measures of scholarly impact, calculated on the basis of both citation and usage log data (a minimal sketch of this kind of analysis follows the abstract).

Conclusions

Our results indicate that the notion of scientific impact is a multi-dimensional construct that cannot be adequately measured by any single indicator, although some measures are more suitable than others. The commonly used citation Impact Factor is not positioned at the core of this construct, but at its periphery, and should thus be used with caution.
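A minimal sketch of the principal component analysis mentioned in the Methodology, assuming the impact measures have already been reduced to per-item rankings: stack the rankings into a matrix and fit PCA. The random matrix below merely stands in for the 39 real citation- and usage-based measures.

```python
# Sketch: PCA over a matrix of rankings, one column per impact measure.
# Random stand-in data; the study used 39 real citation- and usage-based measures.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_items, n_measures = 1000, 39
scores = rng.random((n_items, n_measures))        # stand-in raw scores per measure
ranks = scores.argsort(axis=0).argsort(axis=0)    # convert each column to a ranking

pca = PCA().fit(ranks)
print("variance explained by the first three components:",
      pca.explained_variance_ratio_[:3].round(3))
```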

3.
Editor's suggested further reading in BioEssays: ‘Can we do better than existing author citation metrics?’ and ‘Counting citations in texts rather than reference lists to improve the accuracy of assessing scientific contribution’.

4.
This article analyses the effect of the degree of interdisciplinarity on the citation impact of individual publications for four different scientific fields. We operationalise interdisciplinarity as disciplinary diversity in the references of a publication, and rather than treating interdisciplinarity as a monodimensional property, we investigate the separate effect of different aspects of diversity on citation impact: i.e., variety, balance and disparity. We use a Tobit regression model to examine the effect of these properties of interdisciplinarity on citation impact, controlling for a range of variables associated with the characteristics of publications. We find that variety has a positive effect on impact, whereas balance and disparity have a negative effect. Our results further qualify the separate effect of these three aspects of diversity by pointing out that all three dimensions of interdisciplinarity display a curvilinear (inverted U-shape) relationship with citation impact. These findings can be interpreted in two different ways. On the one hand, they are consistent with the view that, while combining multiple fields has a positive effect on knowledge creation, successful research is better achieved through research efforts that draw on a relatively proximal range of fields, as distal interdisciplinary research might be too risky and more likely to fail. On the other hand, these results may be interpreted as suggesting that scientific audiences are reluctant to cite heterodox papers that mix highly disparate bodies of knowledge, thus giving less credit to publications that are too groundbreaking or challenging.
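The three aspects of diversity can be operationalised in the usual Rao-Stirling style: variety as the number of distinct fields cited, balance as the evenness of the distribution over those fields, and disparity as the average pairwise distance between them. The sketch below follows that common formulation on toy data; it is not necessarily the exact specification used in the article, and the field distances are invented.

```python
# Generic sketch of the three diversity aspects for one publication's reference list:
# variety (number of distinct fields), balance (Shannon evenness), disparity
# (mean pairwise distance between the fields cited). Illustrative data only.
import numpy as np

ref_fields = ["physics", "physics", "biology", "computer science", "biology", "physics"]

# Hypothetical pairwise dissimilarities between fields (e.g. 1 - cosine similarity).
distance = {
    ("physics", "biology"): 0.7,
    ("physics", "computer science"): 0.5,
    ("biology", "computer science"): 0.8,
}
def d(a, b):
    return 0.0 if a == b else distance.get((a, b), distance.get((b, a)))

fields, counts = np.unique(ref_fields, return_counts=True)
p = counts / counts.sum()

variety = len(fields)
balance = -(p * np.log(p)).sum() / np.log(variety)            # Shannon evenness in [0, 1]
disparity = np.mean([d(a, b) for i, a in enumerate(fields)
                     for b in fields[i + 1:]])                 # mean pairwise distance

print(f"variety={variety}, balance={balance:.2f}, disparity={disparity:.2f}")
```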

5.
The ever-increasing quantity and complexity of scientific production have made it difficult for researchers to keep track of advances in their own fields. This, together with the growing popularity of online scientific communities, calls for the development of effective information filtering tools. We propose here an algorithm which simultaneously computes the reputation of users and the fitness of papers in a bipartite network representing an online scientific community. Evaluation on artificially generated data and real data from the Econophysics Forum is used to determine the method's best-performing variants. We show that when the input data are extended to a multilayer network including users, papers and authors, and the algorithm is correspondingly modified, performance improves on multiple levels. In particular, the top papers it identifies have higher citation counts, and the top authors higher h-indices, than those chosen by other algorithms. We finally show that our algorithm is robust against persistent authors (spammers), which makes the method readily applicable to existing online scientific communities.
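The coupled reputation-fitness idea can be illustrated with a generic fixed-point iteration on a bipartite endorsement matrix: papers gain fitness from the reputation of the users who endorse them, and users gain reputation from the fitness of the papers they endorse. This is a simplified sketch of that general scheme on random toy data, not the authors' exact algorithm.

```python
# Generic sketch of a coupled reputation-fitness iteration on a bipartite
# user-paper endorsement network (toy data; not the authors' exact algorithm).
import numpy as np

rng = np.random.default_rng(1)
n_users, n_papers = 5, 8
R = rng.integers(0, 2, size=(n_users, n_papers)).astype(float)  # 1 = user endorsed paper

reputation = np.ones(n_users)
fitness = np.ones(n_papers)
for _ in range(50):
    # A paper's fitness: reputation-weighted count of its endorsements.
    fitness_new = R.T @ reputation
    fitness_new /= fitness_new.max() or 1.0
    # A user's reputation: average fitness of the papers they endorsed.
    endorsed = np.maximum(R.sum(axis=1), 1)
    reputation_new = (R @ fitness_new) / endorsed
    reputation_new /= reputation_new.max() or 1.0
    if np.allclose(fitness_new, fitness) and np.allclose(reputation_new, reputation):
        break
    fitness, reputation = fitness_new, reputation_new

print("paper fitness:   ", fitness.round(2))
print("user reputation: ", reputation.round(2))
```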

6.

Background

Aripiprazole, a second-generation antipsychotic medication, has been increasingly used in the maintenance treatment of bipolar disorder and received approval from the U.S. Food and Drug Administration for this indication in 2005. Given its widespread use, we sought to critically review the evidence supporting the use of aripiprazole in the maintenance treatment of bipolar disorder and examine how that evidence has been disseminated in the scientific literature.

Methods and Findings

We systematically searched multiple databases to identify double-blind, randomized controlled trials of aripiprazole for the maintenance treatment of bipolar disorder, excluding other types of studies, such as open-label, acute, and adjunctive studies. We then used a citation search to identify articles that cited these trials and rated the quality of their citations. Our evidence search protocol identified only two publications, both describing the results of a single trial conducted by Keck et al., which met criteria for inclusion in this review. We describe four issues that limit the interpretation of that trial as supporting the use of aripiprazole for bipolar maintenance: (1) insufficient duration to demonstrate maintenance efficacy; (2) limited generalizability due to its enriched sample; (3) possible conflation of iatrogenic adverse effects of abrupt medication discontinuation with beneficial effects of treatment; and (4) a low overall completion rate. Our citation search protocol yielded 80 publications that cited the Keck et al. trial in discussing the use of aripiprazole for bipolar maintenance. Of these, only 24 (30%) mentioned the adverse events reported, and only four (5%) mentioned the study's limitations.

Conclusions

A single trial by Keck et al. represents the entirety of the literature on the use of aripiprazole for the maintenance treatment of bipolar disorder. Although careful review identifies four critical limitations to the trial's interpretation and overall utility, the trial has been uncritically cited in the subsequent scientific literature.

7.
Biomedical journals must adhere to strict standards of editorial quality. In a globalized academic scenario, biomedical journals must compete firstly to publish the most relevant original research and secondly to obtain the broadest possible visibility and the widest dissemination of their scientific contents. The cornerstone of the scientific process is still the peer-review system, but additional quality criteria should be met. Recently, access to medical information has been revolutionized by electronic editions. Bibliometric databases such as MEDLINE, the ISI Web of Science and Scopus offer comprehensive online information on the medical literature. Classically, the prestige of biomedical journals has been measured by their impact factor but, recently, other indicators such as the SCImago SJR or the Eigenfactor are emerging as alternative indices of a journal's quality. Assessing the scholarly impact of research and the merits of individual scientists remains a major challenge. Allocation of authorship credit also remains controversial. Furthermore, in our Kafkaesque world, we prefer to count rather than read the articles we judge. Quantitative publication metrics (research output) and citation analyses (scientific influence) are key determinants of the scientific success of individual investigators. However, academia is embracing new objective indicators (such as the “h” index) to evaluate scholarly merit. The present review discusses some editorial issues affecting biomedical journals, currently available bibliometric databases, bibliometric indices of journal quality and, finally, indicators of research performance and scientific success.
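Of the indicators mentioned above, the h-index has the simplest operational definition: a researcher has index h if h of their papers have received at least h citations each. A minimal computation on an invented citation list:

```python
# Minimal h-index computation: the largest h such that h papers have >= h citations each.
def h_index(citation_counts):
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for i, c in enumerate(counts, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

print(h_index([25, 8, 5, 3, 3, 1]))  # -> 3 (three papers with at least 3 citations each)
```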

8.
Speculation over a global rise in jellyfish populations has become widespread in the scientific literature, but until recently the purported ‘global increase’ had not been tested. Here we present a citation analysis of the peer-reviewed literature to track the evolution of the current perception of increases in jellyfish and to identify key papers involved in its establishment. Trend statements and citation threads were reviewed and arranged in a citation network. Trend statements were assessed according to their degree of affirmation and spatial scale, and the appropriateness of the citations used to support statements was assessed. Analyses showed that 48.9% of publications misinterpreted the conclusions of cited sources, with a bias towards claiming that jellyfish populations are increasing, and that a single review had the most influence on the network. Collectively, these disparities resulted in a network based on unsubstantiated statements and citation threads. As a community, we must ensure that our statements about scientific findings are accurately substantiated and carefully communicated so that incorrect perceptions, as in the case of jellyfish blooms, do not develop in the absence of rigorous testing.

9.
The Protein Data Bank (PDB) is the worldwide repository of 3D structures of proteins, nucleic acids and complex assemblies. The PDB’s large corpus of data (> 100,000 structures) and related citations provide a well-organized and extensive test set for developing and understanding data citation and access metrics. In this paper, we present a systematic investigation of how authors cite the PDB as a data repository. We describe a novel metric, based on information cascades constructed from the citation network, that measures influence between competing works, and we apply it to analyze different practices for citing the PDB. Based on this new metric, we found that the original RCSB PDB publication from the year 2000 continues to attract the most citations even though many follow-up updates have been published. None of these follow-up publications by members of the wwPDB organization can compete with the original publication in terms of citations and influence. Meanwhile, authors increasingly choose to use URLs of the PDB in the text instead of citing PDB papers, disrupting the growth of literature citations. A comparison of data usage statistics and paper citations shows that PDB Web access is highly correlated with URL mentions in the text. The results reveal the trend of how authors cite a biomedical data repository and may provide useful insight into how to measure the impact of a data repository.

10.

Background

New approaches and tools were needed to support the strategic planning, implementation and management of a Program launched by the Brazilian Government to fund research, development and capacity building on neglected tropical diseases, with a strong focus on the North, Northeast and Center-West regions of the country, where these diseases are prevalent.

Methodology/Principal Findings

Based on demographic, epidemiological and burden of disease data, seven diseases were selected by the Ministry of Health as targets of the initiative. Publications on these diseases by Brazilian researchers were retrieved from international databases, then analyzed and processed with text-mining tools to standardize author and institution names and addresses. Co-authorship networks based on these publications were assembled, visualized and analyzed with social network analysis software packages. Network visualization and analysis generated new information that allowed better design and strategic planning of the Program: decision makers could characterize network components by area of work, identify institutions and authors playing major roles as central hubs or located at critical network cut-points, and readily detect authors or institutions participating in large international scientific collaboration networks (a minimal sketch of the hub and cut-point analysis follows the abstract).

Conclusions/Significance

Traditional criteria used to monitor and evaluate research proposals or R&D Programs, such as researchers' productivity and the impact factor of scientific publications, are of limited value when addressing research areas of low productivity or involving institutions from endemic regions where human resources are limited. Network analysis was found to generate new and valuable information relevant to the strategic planning, implementation and monitoring of the Program. It afforded the funding agencies a more proactive role with respect to public health and equity goals and to scientific capacity-building objectives, and it enabled a more consistent engagement of institutions and authors from endemic regions, based on innovative criteria and parameters anchored in objective scientific data.
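Identifying central hubs and critical cut-points in a co-authorship network of this kind maps onto standard graph routines (degree centrality and articulation points). A minimal networkx sketch on an invented toy network:

```python
# Sketch: finding hub authors (high degree centrality) and cut-points
# (articulation points whose removal disconnects the network) with networkx.
import networkx as nx

G = nx.Graph([
    ("Alice", "Bob"), ("Bob", "Carol"), ("Carol", "Alice"),   # one collaborating cluster
    ("Carol", "Dan"),                                          # Dan reachable only via Carol
    ("Dan", "Eve"), ("Eve", "Frank"), ("Frank", "Dan"),        # a second cluster
])

degree = nx.degree_centrality(G)
hubs = sorted(degree, key=degree.get, reverse=True)[:2]
cut_points = list(nx.articulation_points(G))

print("hub authors:", hubs)          # e.g. ['Carol', 'Dan']
print("cut-points:", cut_points)     # removing these authors disconnects the network
```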

11.
This paper has two aims: (i) to introduce a novel method for measuring which part of overall citation inequality can be attributed to differences in citation practices across scientific fields, and (ii) to implement an empirical strategy for making meaningful comparisons between the number of citations received by articles in 22 broad fields. The number of citations received by any article is seen as a function of the article’s scientific influence, and the field to which it belongs. A key assumption is that articles in the same quantile of any field citation distribution have the same degree of citation impact in their respective field. Using a dataset of 4.4 million articles published in 1998–2003 with a five-year citation window, we estimate that differences in citation practices between the 22 fields account for 14% of overall citation inequality. Our empirical strategy is based on the strong similarities found in the behavior of citation distributions. We obtain three main results. Firstly, we estimate a set of average-based indicators, called exchange rates, to express the citations received by any article in a large interval in terms of the citations received in a reference situation. Secondly, using our exchange rates as normalization factors of the raw citation data reduces the effect of differences in citation practices to, approximately, 2% of overall citation inequality in the normalized citation distributions. Thirdly, we provide an empirical explanation of why the usual normalization procedure based on the fields’ mean citation rates is found to be equally successful.
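A simplified illustration of average-based field normalization, assuming toy citation data: each article's raw count is divided by its field's factor relative to an all-fields reference. The paper's exchange rates are estimated over a large quantile interval of each field's citation distribution rather than from full-field means, so the sketch below shows only the general mechanism.

```python
# Simplified sketch of average-based field normalization ("exchange rates"):
# divide an article's raw citations by its field's average relative to a reference.
# Toy data; the paper estimates its exchange rates over a large quantile interval.
import numpy as np

field_citations = {
    "mathematics": [0, 1, 2, 3, 5, 8],
    "molecular biology": [2, 10, 15, 25, 40, 80],
}
reference_mean = np.mean([c for counts in field_citations.values() for c in counts])
exchange_rate = {f: np.mean(c) / reference_mean for f, c in field_citations.items()}

article = {"field": "mathematics", "citations": 5}
normalized = article["citations"] / exchange_rate[article["field"]]

print("exchange rates:", {f: round(e, 2) for f, e in exchange_rate.items()})
print(f"normalized citations: {normalized:.1f}")
```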

12.

Background

Human knowledge and innovation are recorded in two media: scholarly publications and patents. These records not only document new scientific insights or newly developed methods, but also carefully cite the prior work upon which the innovation is built.

Methodology

We quantify the impact of information flow across fields using two large citation datasets: one spanning over a century of scholarly work in the natural sciences, social sciences and humanities, and the second spanning a quarter century of United States patents.

Conclusions

We find that a publication's citing across disciplines is tied to its subsequent impact. In the case of patents and natural science publications, those that are cited at least once are cited slightly more when they draw on research outside of their area. In contrast, in the social sciences, citing within one's own field tends to be positively correlated with impact.

13.
Journal impact factors have become an important criterion to judge the quality of scientific publications over the years, influencing the evaluation of institutions and individual researchers worldwide. However, they are also subject to a number of criticisms. Here we point out that the calculation of a journal’s impact factor is mainly based on the date of publication of its articles in print form, despite the fact that most journals now make their articles available online before that date. We analyze 61 neuroscience journals and show that delays between online and print publication of articles increased steadily over the last decade. Importantly, such a practice varies widely among journals, as some of them have no delays, while for others this period is longer than a year. Using a modified impact factor based on online rather than print publication dates, we demonstrate that online-to-print delays can artificially raise a journal’s impact factor, and that this inflation is greater for longer publication lags. We also show that correcting the effect of publication delay on impact factors changes journal rankings based on this metric. We thus suggest that indexing of articles in citation databases and calculation of citation metrics should be based on the date of an article’s online appearance, rather than on that of its publication in print.
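The two-year impact factor for year Y is, roughly, the citations received in Y by items published in Y-1 and Y-2 divided by the number of such items; the correction described above amounts to assigning each article to its online year rather than its print year before applying that formula. A hedged sketch on invented article records (not the study's dataset):

```python
# Sketch: recomputing a two-year impact-factor proxy using online rather than
# print publication years (illustrative records only).
articles = [  # (article_id, print_year, online_year, citations_received_in_2015)
    ("a1", 2014, 2013, 10),
    ("a2", 2014, 2014, 4),
    ("a3", 2013, 2012, 7),
    ("a4", 2013, 2013, 2),
]

def impact_factor(records, year, date_index):
    """Citations received in `year` to items dated year-1 or year-2, per item."""
    window = [r for r in records if r[date_index] in (year - 1, year - 2)]
    return sum(r[3] for r in window) / len(window) if window else 0.0

print("IF (print dates): ", impact_factor(articles, 2015, 1))
print("IF (online dates):", impact_factor(articles, 2015, 2))
```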

14.
Authorship credit for multi-authored scientific publications is routinely allocated either by issuing full publication credit repeatedly to all coauthors, or by dividing one credit equally among all coauthors. The ensuing inflationary and equalizing biases distort derived bibliometric measures of merit by systematically benefiting secondary authors at the expense of primary authors. Here I show how harmonic counting, which allocates credit according to authorship rank and the number of coauthors, provides simultaneous source-level correction for both biases as well as accommodating further decoding of byline information. I also demonstrate large and erratic effects of counting bias on the original h-index, and show how the harmonic version of the h-index provides unbiased bibliometric ranking of scientific merit while retaining the original's essential simplicity, transparency and intended fairness. Harmonic decoding of byline information resolves the conundrum of authorship credit allocation by providing a simple recipe for source-level correction of inflationary and equalizing bias. Harmonic counting could also offer unrivalled accuracy in automated assessments of scientific productivity, impact and achievement.
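Under harmonic counting as it is usually defined, the i-th of N coauthors receives credit (1/i) / Σ_{j=1}^{N}(1/j), so credit decreases with byline rank and the credits for one paper sum to exactly one; a minimal sketch:

```python
# Harmonic authorship credit: the i-th of N coauthors receives (1/i) / sum_{j=1..N}(1/j),
# so credit reflects byline rank and sums to exactly one per paper.
def harmonic_credit(n_authors):
    denom = sum(1.0 / j for j in range(1, n_authors + 1))
    return [(1.0 / i) / denom for i in range(1, n_authors + 1)]

for n in (1, 3, 5):
    credits = harmonic_credit(n)
    print(n, [round(c, 3) for c in credits], "sum =", round(sum(credits), 3))

# Contrast: full counting gives every coauthor 1.0 (inflationary); fractional
# counting gives each coauthor 1/N (equalizing). Harmonic counting corrects both biases.
```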

15.

Background

Conventional scientometric predictors of research performance such as the number of papers, citations, and papers in the top 1% of highly cited papers cannot be validated in terms of the number of Nobel Prize achievements across countries and institutions. The purpose of this paper is to find a bibliometric indicator that correlates with the number of Nobel Prize achievements.

Methodology/Principal Findings

This study assumes that the high-citation tail of the citation distribution holds most of the information about high scientific performance. Here I propose the x-index, which is calculated from the number of national articles in the top 1% and 0.1% of highly cited papers and has a subtractive term to discount highly cited papers that are not scientific breakthroughs. The x-index, the number of Nobel Prize achievements, and the number of national articles in Nature or Science are highly correlated. The high correlations among these independent parameters demonstrate that they are good measures of high scientific performance because scientific excellence is their only common characteristic. However, the x-index has superior features compared with the other two parameters: Nobel Prize achievements are low-frequency events and their number is an imprecise indicator, which in addition is zero for most institutions, and evaluating research by the number of publications in prestigious journals is not advised.

Conclusion

The x-index is a simple and precise indicator for high research performance.

16.
There are now many methods available to assess the relative citation performance of peer-reviewed journals. Regardless of their individual faults and advantages, citation-based metrics are used by researchers to maximize the citation potential of their articles, and by employers to rank academic track records. The absolute value of any particular index is arguably meaningless unless compared to other journals, and different metrics result in divergent rankings. To provide a simple yet more objective way to rank journals within and among disciplines, we developed a κ-resampled composite journal rank incorporating five popular citation indices: Impact Factor, Immediacy Index, Source-Normalized Impact Per Paper, SCImago Journal Rank and Google 5-year h-index; this approach provides an index of relative rank uncertainty. We applied the approach to six sample sets of scientific journals from Ecology (n = 100 journals), Medicine (n = 100), Multidisciplinary (n = 50), Ecology + Multidisciplinary (n = 25), Obstetrics & Gynaecology (n = 25) and Marine Biology & Fisheries (n = 25). We then cross-compared the κ-resampled ranking for the Ecology + Multidisciplinary journal set to the results of a survey of 188 publishing ecologists who were asked to rank the same journals, and found a 0.68–0.84 Spearman’s ρ correlation between the two ranking datasets. Our composite index approach therefore approximates relative journal reputation, at least for that discipline. Agglomerative and divisive clustering and multi-dimensional scaling techniques applied to the Ecology + Multidisciplinary journal set identified specific clusters of similarly ranked journals, with only Nature & Science separating out from the others. When comparing a selection of journals within or among disciplines, we recommend collecting multiple citation-based metrics for a sample of relevant and realistic journals to calculate the composite rankings and their relative uncertainty windows.
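A simplified, bootstrap-style stand-in for such a composite ranking (not the paper's exact κ-resampling procedure): rank the journals on each index, repeatedly resample which indices contribute, and summarize the spread of the resulting mean ranks as an uncertainty window. The data below are random placeholders for the five real citation indices.

```python
# Simplified sketch of a resampled composite journal rank: rank journals on each
# index, bootstrap over the indices, and report the spread of mean ranks per journal.
# Toy data; not the paper's exact kappa-resampling method.
import numpy as np

rng = np.random.default_rng(42)
journals = ["J1", "J2", "J3", "J4", "J5"]
indices = rng.random((5, 5))                      # rows = journals, cols = 5 citation indices
ranks = indices.argsort(axis=0).argsort(axis=0)   # per-index rank (higher = better score)

composites = []
for _ in range(1000):
    chosen = rng.choice(5, size=5, replace=True)  # bootstrap resample of the indices
    composites.append(ranks[:, chosen].mean(axis=1))
composites = np.array(composites)

for j, name in enumerate(journals):
    lo, hi = np.percentile(composites[:, j], [2.5, 97.5])
    print(f"{name}: median composite rank {np.median(composites[:, j]):.1f} "
          f"(95% interval {lo:.1f} to {hi:.1f})")
```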

17.
Modern science has become more complex and interdisciplinary in nature, which encourages researchers to be more collaborative and to engage in larger collaboration networks. Various aspects of collaboration networks have been examined so far to detect the most determinant factors in knowledge creation and scientific production. One network structure that has recently attracted much theoretical attention is the small world. It has been suggested that small-world structure can improve information transmission among network actors. In this paper, using data on 12 periods of journal publications by Canadian researchers in the natural sciences and engineering, the co-authorship networks of the researchers are created. By measuring small-world indicators, we assess the small-worldness of this network and its relation to researchers' productivity, the quality of their publications, and scientific team size. Our results show that the examined co-authorship network clearly exhibits small-world properties. In addition, it is suggested that in a small-world network researchers expand their team size by connecting to other experts in the field. This expansion may result in higher productivity for the whole team through access to new resources, internal referrals, and the exchange of ideas among team members. Moreover, although small-worldness is positively correlated with the quality of the articles in terms of both citation count and journal impact factor, it is negatively related to the average productivity of researchers in terms of the number of their publications.
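The usual small-world check compares a network's clustering coefficient and average path length with those of a random graph of the same size and density (high clustering with similarly short paths). A minimal sketch on a synthetic stand-in network, not the Canadian co-authorship data:

```python
# Sketch: small-world indicators -- compare clustering and average path length
# of a network against an equivalent Erdos-Renyi random graph (synthetic example).
import networkx as nx

G = nx.connected_watts_strogatz_graph(n=200, k=6, p=0.1, seed=1)  # stand-in network

C, L = nx.average_clustering(G), nx.average_shortest_path_length(G)

n, m = G.number_of_nodes(), G.number_of_edges()
R = nx.gnm_random_graph(n, m, seed=1)
if not nx.is_connected(R):
    R = R.subgraph(max(nx.connected_components(R), key=len)).copy()
C_rand, L_rand = nx.average_clustering(R), nx.average_shortest_path_length(R)

sigma = (C / C_rand) / (L / L_rand)   # small-world coefficient; >> 1 suggests a small world
print(f"C={C:.3f} (random {C_rand:.3f}), L={L:.2f} (random {L_rand:.2f}), sigma={sigma:.1f}")
```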
