Similar articles
20 similar articles found (search time: 46 ms)
1.
The number of citations that papers receive has become significant in measuring researchers' scientific productivity, and such measurements are important when one seeks career opportunities and research funding. Skewed citation practices can thus have profound effects on academic careers. We investigated (i) how frequently authors misinterpret original information and (ii) how frequently authors inappropriately cite reviews instead of the articles upon which the reviews are based. To this end, we carried out a survey of ecology journals indexed in the Web of Science and assessed the appropriateness of citations of review papers. Reviews were cited significantly more often than regular articles. In addition, 22% of citations were inaccurate, and another 15% unfairly gave credit to the review authors for other scientists' ideas. These practices should be stopped, mainly through more open discussion among mentors, researchers and students.

2.
Déjà vu--a study of duplicate citations in Medline (total citations: 2; self-citations: 0; citations by others: 2)
MOTIVATION: Duplicate publication degrades the quality of the scientific corpus, has been difficult to detect, and studies thus far have been limited in scope and size. Using text similarity searches, we were able to identify signatures of duplicate citations among a body of abstracts. RESULTS: A sample of 62,213 Medline citations was examined and a database of manually verified duplicate citations was created to study author publication behavior. We found that 0.04% of the citations with no shared authors were highly similar and are thus potential cases of plagiarism. 1.35% with shared authors were sufficiently similar to be considered duplicates. Extrapolating, this would correspond to 3500 and 117,500 duplicate citations in total, respectively. AVAILABILITY: eTBLAST, an automated citation matching tool, and Déjà vu, the duplicate citation database, are freely available at http://invention.swmed.edu/ and http://spore.swmed.edu/dejavu
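The abstract does not describe eTBLAST's similarity engine in detail. As a minimal sketch of the general idea of text-similarity duplicate screening, one could compare abstracts by the Jaccard similarity of their word k-grams ("shingles"); the function names and the 0.6 threshold below are illustrative assumptions, not the tool's actual method.

```python
def shingles(text, k=3):
    """Lowercased word k-grams ("shingles") of a text."""
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b):
    """Jaccard similarity of two shingle sets."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def is_duplicate(abstract_1, abstract_2, threshold=0.6):
    """Flag a pair of abstracts as a potential duplicate publication."""
    return jaccard(shingles(abstract_1), shingles(abstract_2)) >= threshold
```

In practice a production screen would also normalise punctuation and use an inverted index to avoid all-pairs comparison over millions of citations.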

3.
Author-level metrics are a widely used measure of scientific success. The h-index and its variants measure publication output (number of publications) and research impact (number of citations). They are often used to influence decisions, such as allocating funding or jobs. Here, we argue that the emphasis on publication output and impact hinders scientific progress in the fields of ecology and evolution because it disincentivizes two fundamental practices: generating impactful (and therefore often long-term) datasets and sharing data. We describe a new author-level metric, the data-index, which values both dataset output (number of datasets) and impact (number of data-index citations), and thus promotes generating and sharing data. We discuss how it could be implemented and provide user guidelines. The data-index is designed to complement other metrics of scientific success: scientific contributions are diverse, and our value system should reflect that, both for the benefit of scientific progress and to create a value system that is more equitable, diverse, and inclusive. Future work should focus on promoting other scientific contributions, such as communicating science, informing policy, mentoring other scientists, and providing open-access code and tools.
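The abstract does not give the data-index formula. Assuming it mirrors the familiar h-index construction but counts datasets and the citations they receive, a minimal sketch might look like the following (the h-style definition and function names are assumptions for illustration, not the paper's definition):

```python
def h_style_index(citation_counts):
    """Largest d such that d items each have at least d citations."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

def data_index(dataset_citations):
    """Hypothetical data-index: h-style index over a researcher's datasets."""
    return h_style_index(dataset_citations)
```

For example, a researcher whose datasets have been cited 10, 8, 5, 4 and 3 times would score 4 under this sketch, since four datasets have at least four citations each.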

4.
Data “publication” seeks to appropriate the prestige of authorship in the peer-reviewed literature to reward researchers who create useful and well-documented datasets. The scholarly communication community has embraced data publication as an incentive to document and share data. But numerous new and ongoing experiments in implementation have not yet resolved what a data publication should be, when data should be peer-reviewed, or how data peer review should work. While researchers have been surveyed extensively regarding data management and sharing, their perceptions and expectations of data publication are largely unknown. To bring this important yet neglected perspective into the conversation, we surveyed ~250 researchers across the sciences and social sciences, asking what expectations “data publication” raises and what features would be useful for evaluating the trustworthiness, evaluating the impact, and enhancing the prestige of a data publication. We found that researcher expectations of data publication center on availability, generally through an open database or repository. Few respondents expected published data to be peer-reviewed, but peer-reviewed data enjoyed much greater trust and prestige. The importance of adequate metadata was acknowledged, in that almost all respondents expected data peer review to include evaluation of the data’s documentation. Formal citation in the reference list was affirmed by most respondents as the proper way to credit dataset creators. Citation count was viewed as the most useful measure of impact, but download count was seen as nearly as valuable. These results offer practical guidance for data publishers seeking to meet researcher expectations and enhance the value of published data.

5.
6.
Comparative statistical analyses often require data harmonization, yet the social sciences do not have clear operationalization frameworks that guide and homogenize variable coding decisions across disciplines. When faced with a need to harmonize variables, researchers often look for guidance from various international studies that employ output harmonization, such as the Comparative Survey of Election Studies, which offer recoding structures for the same variable (e.g. marital status). More problematically, there are no agreed documentation standards or journal requirements for reporting variable harmonization to facilitate a transparent replication process. We propose a conceptual and data-driven digital solution that creates harmonization documentation standards for publication and scholarly citation: QuickCharmStats 1.1. It is free and open-source software that allows for the organizing, documenting and publishing of data harmonization projects. QuickCharmStats starts at the conceptual level and its workflow ends with a variable recoding syntax. It is therefore flexible enough to reflect a variety of theoretical justifications for variable harmonization. Using the socio-demographic variable ‘marital status’, we demonstrate how the CharmStats workflow collates metadata while being guided by the scientific standards of transparency and replication. It encourages researchers to publish their harmonization work by providing a permanent identifier to researchers who complete the peer review process. Those who contribute original data harmonization work to their discipline can now be credited through citations. Finally, we propose peer-review standards for harmonization documentation, describe a route to online publishing, and provide a referencing format to cite harmonization projects. Although CharmStats products are designed for social scientists, our adherence to the scientific method ensures our products can be used by researchers across the sciences.

7.
The Ecological Society of Australia was founded in 1959, and the society’s journal was first published in 1976. To examine how research published in the society’s journal has changed over this time, we used text mining to quantify themes and trends in the body of work published by the Australian Journal of Ecology and Austral Ecology from 1976 to 2019. We used topic models to identify 30 ‘topics’ within 2778 full‐text articles in 246 issues of the journal, followed by mixed modelling to identify topics with above‐average or below‐average popularity in terms of the number of publications or citations that they contain. We found high inter‐decadal turnover in research topics, with an early emphasis on highly specific ecosystems or processes giving way to a modern emphasis on community, spatial and fire ecology, invasive species and statistical modelling. Despite an early focus on Australian research, papers discussing South American ecosystems are now among the fastest‐growing and most frequently cited topics in the journal. Topics that were growing fastest in publication rates were not always the same as those with high citation rates. Our results provide a systematic breakdown of the topics that Austral Ecology authors and editors have chosen to research, publish and cite through time, providing a valuable window into the historical and emerging foci of the journal.

8.

Background

Systematic reviews of the literature occupy the highest position in currently proposed hierarchies of evidence. The aims of this study were to assess whether citation classics exist in published systematic review and meta-analysis (SRM), examine the characteristics of the most frequently cited SRM articles, and evaluate the contribution of different world regions.

Methods

The 100 most cited SRM were identified in October 2012 using the Science Citation Index database of the Institute for Scientific Information. Data were extracted by one author. Spearman’s correlation was used to assess the association between years since publication, numbers of authors, article length, journal impact factor, and average citations per year.

Results

Among the 100 citation classics, published between 1977 and 2008, the most cited article received 7308 citations and the least-cited 675 citations. The average citations per year ranged from 27.8 to 401.6. First authors from the USA produced the highest number of citation classics (n=46), followed by the UK (n=28) and Canada (n=15). The 100 articles were published in 42 journals led by the Journal of the American Medical Association (n=18), followed by the British Medical Journal (n=14) and The Lancet (n=13). There was a statistically significant positive correlation between number of authors (Spearman’s rho=0.320, p=0.001), journal impact factor (rho=0.240, p=0.016) and average citations per year. There was a statistically significant negative correlation between average citations per year and year since publication (rho = -0.636, p=0.0001). The most cited papers identified seminal contributions and originators of landmark methodological aspects of SRM and reflect major advances in the management of and predisposing factors for chronic diseases.

Conclusions

Since the late 1970s, the USA, UK, and Canada have led the production of citation classic papers. No first author from a low- or middle-income country (LMIC) authored one of the 100 most cited SRM.

9.
In the era of social media there are now many different ways that a scientist can build their public profile; the publication of high-quality scientific papers being just one. While social media is a valuable tool for outreach and the sharing of ideas, there is a danger that this form of communication is gaining too high a value and that we are losing sight of key metrics of scientific value, such as citation indices. To help quantify this, I propose the ‘Kardashian Index’, a measure of discrepancy between a scientist’s social media profile and publication record based on the direct comparison of numbers of citations and Twitter followers.
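The index can be computed directly from the two numbers the abstract names. The sketch below uses the empirical follower-count fit reported in the paper (expected followers = 43.3 × C^0.32, where C is the citation count); treat the constants as quoted from that fit rather than independently verified here.

```python
def expected_followers(citations):
    """Expected Twitter followers for a scientist with the given citation
    count, per the empirical fit reported in the Kardashian Index paper."""
    return 43.3 * citations ** 0.32

def kardashian_index(followers, citations):
    """K-index: ratio of actual to expected followers. Values well above 1
    indicate a social-media profile outsized relative to the publication
    record; the paper labels K > 5 a "Science Kardashian"."""
    return followers / expected_followers(citations)
```

For example, a scientist with 100 citations is "expected" to have roughly 190 followers; with 5000 actual followers their K-index would be well above the paper's threshold of 5.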

10.

Background

Although it is a simple and effective index that has been widely used to evaluate the academic output of scientists, the h-index suffers from drawbacks. One critical disadvantage is that only h-squared citations can be inferred from the h-index, which completely ignores excess and h-tail citations, leading to unfair and inaccurate evaluations in many cases.

Methodology/Principal Findings

To solve this problem, I propose the h’-index, in which h-squared, excess and h-tail citations are all considered. Based on the citation data of the 100 most prolific economists, the h’-index shows better correlation than the h-index with total citation number and citations per publication, which, although relatively reliable and widely used, do not carry information about the citation distribution. In contrast, the h’-index can discriminate between the shapes of citation distributions, leading to more accurate evaluation.

Conclusions/Significance

The h’-index improves on the h-index, as well as on total citation number and citations per publication, by discriminating between shapes of the citation distribution, making it a better single-number index for evaluating scientific output in a way that is fairer and more reasonable.
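The three citation classes the abstract names (h-squared, excess, and h-tail citations) partition a researcher's total citations. The sketch below shows only that decomposition; how the h’-index combines the three parts is defined in the paper itself and is not reproduced here, and the function name is illustrative.

```python
def citation_components(citation_counts):
    """Split a citation record into the h-core square (h*h citations),
    excess citations above the square, and the h-tail."""
    counts = sorted(citation_counts, reverse=True)
    # h: number of papers with at least as many citations as their rank
    h = sum(1 for rank, cites in enumerate(counts, start=1) if cites >= rank)
    excess = sum(cites - h for cites in counts[:h])  # above the h*h square
    tail = sum(counts[h:])                           # papers outside the h-core
    return h, excess, tail
```

By construction, h*h + excess + tail equals the total citation count, which is exactly the information the plain h-index discards.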

11.
Do citations accumulate too slowly in the social sciences to be used to assess the quality of recent articles? I investigate whether this is the case using citation data for all articles in economics and political science published in 2006 and indexed in the Web of Science. I find that citations in the first two years after publication explain more than half of the variation in cumulative citations received over a longer period. Journal impact factors improve the correlation between the predicted and actual future ranks of journal articles when using citation data from 2006 alone but the effect declines sharply thereafter. Finally, more than half of the papers in the top 20% in 2012 were already in the top 20% in the year of publication (2006).

12.
Objective To determine if citation counts at two years could be predicted for clinical articles that pass basic criteria for critical appraisal, using data available within three weeks of publication from external sources and an online article rating service.
Design Retrospective cohort study.
Setting Online rating service, Canada.
Participants 1274 articles from 105 journals published from January to June 2005, randomly divided into a 60:40 split to provide derivation and validation datasets.
Main outcome measures 20 article and journal features, including ratings of clinical relevance and newsworthiness, routinely collected by the McMaster online rating of evidence system, compared with citation counts at two years.
Results The derivation analysis showed that the regression equation accounted for 60% of the variation (R2=0.60, 95% confidence interval 0.538 to 0.629). This model applied to the validation dataset gave a similar prediction (R2=0.56, 0.476 to 0.596, shrinkage 0.04; shrinkage measures how well the derived equation matches data from the validation dataset). Cited articles in the top half and top third were predicted with 83% and 61% sensitivity and 72% and 82% specificity, respectively. Higher citation counts were predicted by indexing in numerous databases; number of authors; abstraction in synoptic journals; clinical relevance scores; number of cited references; and original, multicentred, and therapy articles from journals with a greater proportion of articles abstracted.
Conclusion Citation counts at two years can be reliably predicted using data available within three weeks of publication.

13.
Quantifying and comparing the scientific output of researchers has become critical for governments, funding agencies and universities. Comparison by reputation and direct assessment of contributions to the field is no longer possible, as the number of scientists increases and traditional definitions of scientific fields become blurred. The h-index is often used for comparing scientists, but has several well-documented shortcomings. In this paper, we introduce a new index for measuring and comparing the publication records of scientists: the pagerank-index (symbolised as π). The index uses a version of the pagerank algorithm and the citation networks of papers in its computation, and is fundamentally different from the existing variants of the h-index because it considers not only the number of citations but also the actual impact of each citation. We adopt two approaches to demonstrate the utility of the new index. Firstly, we use a simulation model of a community of authors, in which we create various ‘groups’ of authors that differ from each other in inherent publication habits, to show that the pagerank-index is fairer than the existing indices in three distinct scenarios: (i) when authors try to ‘massage’ their index by publishing papers in low-quality outlets primarily to self-cite other papers; (ii) when authors collaborate in large groups in order to obtain more authorships; (iii) when authors spend most of their time producing genuine but low-quality publications that would massage their index. Secondly, we undertake two real-world case studies: (i) the evolving author community of quantum game theory, as defined by Google Scholar; (ii) a snapshot of the high energy physics (HEP) theory research community in arXiv. In both case studies, we find that the list of top authors varies significantly when the h-index and pagerank-index are used for comparison.
We show that in both cases, authors who have collaborated in large groups and/or published less impactful papers tend to be comparatively favoured by the h-index, whereas the pagerank-index highlights authors who have made a relatively small number of definitive contributions, or written papers which served to highlight the link between diverse disciplines, or typically worked in smaller groups. Thus, we argue that the pagerank-index is an inherently fairer and more nuanced metric for quantifying the publication records of scientists compared to existing measures.
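The π-index is built from PageRank scores over the paper citation network; the paper's exact normalisation and author-attribution scheme are not reproduced here. The following is only a generic sketch under stated assumptions: a plain power-iteration PageRank over a toy citation graph, with each paper's score split equally among its co-authors. The damping factor, the equal split, and all names are illustrative choices.

```python
def pagerank(papers, cites, d=0.85, iters=100):
    """Power-iteration PageRank. `cites` maps a paper to the papers it
    cites; score flows from citing paper to cited paper."""
    n = len(papers)
    rank = {p: 1.0 / n for p in papers}
    for _ in range(iters):
        new = {p: (1 - d) / n for p in papers}
        for p in papers:
            refs = cites.get(p, [])
            if refs:
                for q in refs:
                    new[q] += d * rank[p] / len(refs)
            else:
                # paper citing nothing: spread its score uniformly
                for q in papers:
                    new[q] += d * rank[p] / n
        rank = new
    return rank

def author_scores(rank, authorship):
    """Split each paper's PageRank equally among its co-authors."""
    scores = {}
    for paper, authors in authorship.items():
        for author in authors:
            scores[author] = scores.get(author, 0.0) + rank[paper] / len(authors)
    return scores
```

Because score flows along citation edges, a citation from a highly ranked paper is worth more than one from an obscure paper, which is the property that distinguishes this family of indices from raw citation counting.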

14.
In order to improve the h-index in terms of its accuracy and sensitivity to the form of the citation distribution, we propose a new bibliometric index. The basic idea is to define, for any author with a given number of citations, an “ideal” citation distribution which represents a benchmark in terms of number of papers and number of citations per publication, and to obtain an index whose value increases as the real citation distribution approaches its ideal form. The method is very general because the ideal distribution can be defined differently according to the main objective of the index. In this paper we propose to define it by a “squared-form” distribution: this is consistent with many popular bibliometric indices, which reach their maximum value when the distribution is basically a “square”. This approach generally rewards the more regular and reliable researchers, and it seems to be especially suitable for dealing with common situations such as applications for academic positions. To show the advantages of the proposed index, some mathematical properties are proved and an application to real data is presented.
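The abstract defines the index only at the level of the idea (the index grows as the real citation distribution approaches an ideal "square"); the exact formula, and even the index's symbol, are missing from this listing. A hedged sketch of the idea follows, using an ideal square of side floor(sqrt(total citations)) and a normalised L1 distance; both of these choices, and the function names, are illustrative assumptions rather than the paper's definition.

```python
import math

def ideal_square(total_citations):
    """Benchmark distribution: s papers with s citations each,
    where s = floor(sqrt(total citations))."""
    s = math.isqrt(total_citations)
    return [s] * s

def closeness_to_square(citation_counts):
    """1.0 when the distribution matches its ideal square exactly,
    smaller otherwise (1 minus a normalised L1 distance)."""
    total = sum(citation_counts)
    if total == 0:
        return 0.0
    ideal = ideal_square(total)
    real = sorted(citation_counts, reverse=True)
    # pad the shorter list with zero-citation papers
    n = max(len(ideal), len(real))
    ideal += [0] * (n - len(ideal))
    real += [0] * (n - len(real))
    dist = sum(abs(a - b) for a, b in zip(real, ideal))
    return 1.0 - dist / (2 * total)
```

Under this sketch, a "regular" record such as [3, 3, 3] scores 1.0, while a single paper holding all nine citations scores lower, matching the abstract's claim that the approach rewards regular researchers.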

15.
The Protein Circular Dichroism Data Bank (PCDDB) [https://pcddb.cryst.bbk.ac.uk] is an established resource for the biological, biophysical, chemical, bioinformatics, and molecular biology communities. It is a freely-accessible repository of validated protein circular dichroism (CD) spectra and associated sample and metadata, with entries having links to other bioinformatics resources including, amongst others, structure (PDB), AlphaFold, and sequence (UniProt) databases, as well as to published papers which produced the data and cite the database entries. It includes primary (unprocessed) and final (processed) spectral data, which are available in both text and pictorial formats, as well as detailed sample and validation information produced for each of the entries. Recently the metadata content associated with each of the entries, as well as the number and structural breadth of the protein components included, have been expanded. The PCDDB includes data on both wild-type and mutant proteins, and because CD studies primarily examine proteins in solution, it also contains examples of the effects of different environments on their structures, plus thermal unfolding/folding series. Methods for both sequence and spectral comparisons are included. The data included in the PCDDB complement results from crystal, cryo-electron microscopy, NMR spectroscopy, bioinformatics characterisations and classifications, and other structural information available for the proteins via links to other databases. The entries in the PCDDB have been used for the development of new analytical methodologies, for interpreting spectral and other biophysical data, and for providing insight into structures and functions of individual soluble and membrane proteins and protein complexes.

16.
The Protein Data Bank (PDB) is the repository for three-dimensional structures of biological macromolecules, determined by experimental methods. The data in the archive are free and easily available via the Internet from any of the worldwide centers managing this global archive. These data are used by scientists, researchers, bioinformatics specialists, educators, students, and general audiences to understand biological phenomena at a molecular level. Analysis of this structural data also inspires and facilitates new discoveries in science. This chapter describes the tools and methods currently used for deposition, processing, and release of data in the PDB. References to future enhancements are also included. Shuchismita Dutta, Kyle Burkhardt, and Ganesh J. Swaminathan have contributed equally to this work.

17.
18.
The Protein Data Bank (total citations: 183; self-citations: 20; citations by others: 163)
The Protein Data Bank (PDB; http://www.rcsb.org/pdb/ ) is the single worldwide archive of structural data of biological macromolecules. This paper describes the goals of the PDB, the systems in place for data deposition and access, how to obtain further information, and near-term plans for the future development of the resource.

19.
Publication and citation decisions in ecology are likely influenced by many factors, potentially including journal impact factors, direction and magnitude of reported effects, and year of publication. Dissemination bias exists when publication or citation of a study depends on any of these factors. We defined several dissemination biases and determined their prevalence across many sub‐disciplines in ecology, then determined whether or not data quality also affected these biases. We identified dissemination biases in ecology by conducting a meta‐analysis of citation trends for 3867 studies included in 52 meta‐analyses. We correlated effect size, year of publication, impact factor and citation rate within each meta‐analysis. In addition, we explored how data quality as defined in meta‐analyses (sample size or variance) influenced each form of bias. We also explored how the direction of the predicted or observed effect, and the research field, influenced any biases. Year of publication did not influence citation rates. The first papers published in an area reported the strongest effects, and high impact factor journals published the most extreme effects. Effect size was more important than data quality for many publication and citation trends. Dissemination biases appear common in ecology, and although their magnitude was generally small, many were associated with theory tenacity, evidenced as tendencies to cite papers that most strongly support our ideas. The consequences of this behavior are amplified by the fact that papers reporting strong effects were often of lower data quality than papers reporting much weaker effects. Furthermore, high impact factor journals published the strongest effects, generally in the absence of any correlation with data quality.
Increasing awareness of the prevalence of theory tenacity, confirmation bias, and the inattention to data quality among ecologists is a first step towards reducing the impact of these biases on research in our field.

20.
International collaboration is becoming increasingly important for the advancement of science. To gain a more precise understanding of how factors such as international collaboration influence publication success, we divide publication success into two categories: journal placement and citation performance. Analyzing all papers published between 1996 and 2012 in eight disciplines, we find that those with more countries in their affiliations performed better in both categories. Furthermore, specific countries vary in their effects both individually and in combination. Finally, we look at the relationship between national output (in papers published) and input (in citations received) over the 17 years, expanding upon prior depictions by also plotting an expected proportion of citations based on journal placement. Discrepancies between this expectation and the realized proportion of citations illuminate trends in performance, such as the decline of the Global North in response to rapidly developing countries, especially China. Yet, most countries show little to no discrepancy, meaning that, in most cases, citation proportion can be predicted by journal placement alone. This reveals an extreme asymmetry between the opinions of a few reviewers and the degree to which paper acceptance and citation rates influence career advancement.
