Similar Articles
20 similar articles found (search time: 15 ms)
1.
Scholarly collaborations across disparate scientific disciplines are challenging. Collaborators are likely to have their offices in another building, attend different conferences, and publish in other venues; they might speak a different scientific language and work within an unfamiliar scientific culture. This paper presents a detailed analysis of the success and failure of interdisciplinary papers—as manifested in the citations they receive. For 9.2 million interdisciplinary research papers published between 2000 and 2012, we show that the majority (69.9%) of co-cited interdisciplinary pairs are “win-win” relationships, i.e., papers that cite them have higher citation impact, and as few as 3.3% are “lose-lose” relationships. Papers citing references from subdisciplines positioned far apart (in the conceptual space of the UCSD map of science) attract the highest relative citation counts. The findings support the assumption that interdisciplinary research is more successful and leads to results greater than the sum of its disciplinary parts.

2.
In the era of social media there are now many different ways a scientist can build their public profile; the publication of high-quality scientific papers is just one of them. While social media is a valuable tool for outreach and the sharing of ideas, there is a danger that this form of communication is gaining too high a value and that we are losing sight of key metrics of scientific value, such as citation indices. To help quantify this, I propose the “Kardashian Index”: a measure of the discrepancy between a scientist’s social media profile and publication record, based on a direct comparison of the number of citations and Twitter followers.
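The index is straightforward to compute. A minimal sketch in Python, assuming the power-law fit reported in the paper for the expected follower count (roughly F ≈ 43.3 · C^0.32, where C is the citation count):

```python
def kardashian_index(twitter_followers: int, citations: int) -> float:
    """K-index: actual Twitter followers divided by the follower count
    'expected' from the citation record, using the power-law fit
    F_exp = 43.3 * C**0.32 reported in the paper."""
    expected_followers = 43.3 * citations ** 0.32
    return twitter_followers / expected_followers
```

A K-index well above 1 would flag a social media profile that outruns the publication record; a value well below 1, the reverse.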

3.
Many fields face an increasing prevalence of multi-authorship, and this poses challenges in assessing citation metrics. Here, we explore multiple citation indicators that address total impact (number of citations, Hirsch H index [H]), co-authorship adjustment (Schreiber Hm index [Hm]), and author order (total citations to papers as single; single or first; or single, first, or last author). We demonstrate the correlation patterns between these indicators across 84,116 scientists (those among the top 30,000 for impact in a single year [2013] in at least one of these indicators) and separately across 12 scientific fields. Correlation patterns vary across these 12 fields. In physics, total citations are highly negatively correlated with indicators of co-authorship adjustment and of author order, while in other sciences the negative correlation is seen only for total citation impact and citations to papers as single author. We propose a composite score that sums standardized values of these six log-transformed indicators. Of the 1,000 top-ranked scientists with the composite score, only 322 are in the top 1,000 based on total citations. Many Nobel laureates and other extremely influential scientists rank among the top 1,000 with the composite indicator, but would rank much lower based on total citations. Conversely, many of the top 1,000 authors on total citations have had no single/first/last-authored cited paper. More Nobel laureates of 2011–2015 are among the top authors when authors are ranked by the composite score than by total citations, H index, or Hm index; 40/47 of these laureates are among the top 30,000 by at least one of the six indicators. We also explore the sensitivity of indicators to self-citation and alphabetic ordering of authors in papers across different scientific fields. Multiple indicators and their composite may give a more comprehensive picture of impact, although no citation indicator, single or composite, can be expected to select all the best scientists.
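The composite score amounts to summing standardized values of log-transformed indicators. A minimal sketch of that idea (the column layout and the use of log(x+1) are illustrative assumptions; the paper uses six specific citation indicators):

```python
import math
from statistics import mean, stdev

def composite_score(rows):
    """Each row holds one scientist's indicator values (one column per
    indicator). Returns, per scientist, the sum of z-scores of the
    log-transformed columns."""
    cols = [[math.log(v + 1) for v in col] for col in zip(*rows)]
    mus = [mean(c) for c in cols]
    sds = [stdev(c) for c in cols]
    return [sum((cols[j][i] - mus[j]) / sds[j] for j in range(len(cols)))
            for i in range(len(rows))]
```

Because each column is standardized, the score rewards scientists who do well across all indicators rather than on raw citation volume alone.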

4.

Background

Citation data can be used to evaluate the editorial policies and procedures of scientific journals. Here we investigate citation counts for the three different publication tracks of the Proceedings of the National Academy of Sciences of the United States of America (PNAS). This analysis explores the consequences of differences in editor and referee selection, while controlling for the prestige of the journal in which the papers appear.

Methodology/Principal Findings

We find that papers authored and “Contributed” by NAS members (Track III) are on average cited less often than papers that are “Communicated” for others by NAS members (Track I) or submitted directly via the standard peer review process (Track II). However, we also find that the variance in the citation count of Contributed papers, and to a lesser extent Communicated papers, is larger than for direct submissions. Therefore, when examining the 10% most-cited papers from each track, Contributed papers receive the most citations, followed by Communicated papers, while Direct submissions receive the fewest.

Conclusion/Significance

Our findings suggest that PNAS “Contributed” papers, in which NAS-member authors select their own reviewers, balance an overall lower impact with an increased probability of publishing exceptional papers. This analysis demonstrates that different editorial procedures are associated with different levels of impact, even within the same prominent journal, and raises interesting questions about the most appropriate metrics for judging an editorial policy's success.

5.
We use citation data of scientific articles produced by individual nations in different scientific domains to determine the structure and efficiency of national research systems. We characterize the scientific fitness of each nation—that is, the competitiveness of its research system—and the complexity of each scientific domain by means of a non-linear iterative algorithm able to quantitatively assess the advantage of scientific diversification. We find that technologically leading nations, beyond having the largest production of scientific papers and the largest number of citations, do not specialize in a few scientific domains. Rather, they diversify their research system as much as possible. On the other hand, less developed nations are competitive only in scientific domains where many other nations are also present. Diversification thus represents the key element that correlates with scientific and technological competitiveness. A remarkable implication of this structure of scientific competition is that the scientific domains serving as “markers” of national scientific competitiveness are not necessarily those with high technological requirements, but rather those addressing the most “sophisticated” needs of society.
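The abstract does not spell out the iterative algorithm; a sketch in the spirit of the standard fitness–complexity map (nation fitness summed over the domains it covers, domain complexity penalized when low-fitness nations also cover it, both renormalized each step) might look like:

```python
def fitness_complexity(M, n_iter=100):
    """M[c][p] = 1 if nation c is competitive in domain p, else 0.
    Returns (fitness per nation, complexity per domain) after iterating
    the non-linear map with mean-normalization at every step."""
    n_c, n_p = len(M), len(M[0])
    F = [1.0] * n_c
    Q = [1.0] * n_p
    for _ in range(n_iter):
        # fitness: sum of complexities of the domains a nation covers
        F_new = [sum(M[c][p] * Q[p] for p in range(n_p)) for c in range(n_c)]
        # complexity: harmonic-style penalty from low-fitness nations
        Q_new = [1.0 / sum(M[c][p] / F[c] for c in range(n_c))
                 for p in range(n_p)]
        mF = sum(F_new) / n_c
        mQ = sum(Q_new) / n_p
        F = [f / mF for f in F_new]
        Q = [q / mQ for q in Q_new]
    return F, Q
```

On a toy matrix where nation 0 covers both domains and nation 1 covers only the common one, the diversified nation ends up fitter and the rarer domain more complex, which is the diversification effect the paper describes.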

6.
Many results have been obtained by studying scientific-paper citation databases from a network perspective. Articles can be ranked according to their current in-degree, and their future popularity or citation counts can even be predicted. The study of the dynamical properties of such networks, and of the time evolution of their nodes, started more recently. This work adopts an evolutionary perspective and proposes an original algorithm for constructing genealogical trees of scientific papers on the basis of how their citation counts evolve in time. The fitness of a paper here amounts to the growth trend of its in-degree, and a “dying” paper will suddenly see this trend decline in time. It will give birth to, and be taken over by, some of its most prevalent citing “offspring”. Practically, this might be used to trace the successive published milestones of a research field.

7.
In order to improve the h-index in terms of its accuracy and sensitivity to the form of the citation distribution, we propose a new bibliometric index. The basic idea is to define, for any author with a given number of citations, an “ideal” citation distribution which represents a benchmark in terms of number of papers and number of citations per publication, and to obtain an index whose value increases as the real citation distribution approaches its ideal form. The method is very general because the ideal distribution can be defined differently according to the main objective of the index. In this paper we propose to define it by a “squared-form” distribution: this is consistent with many popular bibliometric indices, which reach their maximum value when the distribution is basically a “square”. This approach generally rewards the more regular and reliable researchers, and it seems especially suitable for common situations such as applications for academic positions. To show the advantages of the index, some mathematical properties are proved and an application to real data is presented.

8.
This paper has two aims: (i) to introduce a novel method for measuring which part of overall citation inequality can be attributed to differences in citation practices across scientific fields, and (ii) to implement an empirical strategy for making meaningful comparisons between the number of citations received by articles in 22 broad fields. The number of citations received by any article is seen as a function of the article’s scientific influence, and the field to which it belongs. A key assumption is that articles in the same quantile of any field citation distribution have the same degree of citation impact in their respective field. Using a dataset of 4.4 million articles published in 1998–2003 with a five-year citation window, we estimate that differences in citation practices between the 22 fields account for 14% of overall citation inequality. Our empirical strategy is based on the strong similarities found in the behavior of citation distributions. We obtain three main results. Firstly, we estimate a set of average-based indicators, called exchange rates, to express the citations received by any article in a large interval in terms of the citations received in a reference situation. Secondly, using our exchange rates as normalization factors of the raw citation data reduces the effect of differences in citation practices to, approximately, 2% of overall citation inequality in the normalized citation distributions. Thirdly, we provide an empirical explanation of why the usual normalization procedure based on the fields’ mean citation rates is found to be equally successful.
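The mean-based normalization the paper evaluates can be sketched as follows: each field's "exchange rate" is its mean citation rate relative to the all-fields mean, and raw counts are divided by it. Field names and counts below are made-up illustrations:

```python
def exchange_rates(citations_by_field):
    """Average-based 'exchange rates': each field's mean citation rate
    relative to the mean over all articles, usable as a normalization
    factor for raw citation counts."""
    all_counts = [c for counts in citations_by_field.values() for c in counts]
    global_mean = sum(all_counts) / len(all_counts)
    return {field: (sum(counts) / len(counts)) / global_mean
            for field, counts in citations_by_field.items()}

def normalize(citations, field, rates):
    """Express a raw citation count in field-independent units."""
    return citations / rates[field]
```

After normalization, an article cited at its own field's average receives the same score regardless of field, which is the sense in which the exchange rates make cross-field comparisons meaningful.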

9.
The Protein Data Bank (PDB) is the worldwide repository of 3D structures of proteins, nucleic acids, and complex assemblies. The PDB’s large corpus of data (>100,000 structures) and related citations provide a well-organized and extensive test set for developing and understanding data citation and access metrics. In this paper, we present a systematic investigation of how authors cite the PDB as a data repository. We describe a novel metric based on information cascades, constructed by exploring the citation network to measure influence between competing works, and apply it to analyze different practices for citing the PDB. Based on this new metric, we find that the original RCSB PDB publication from the year 2000 continues to attract the most citations, even though many follow-up updates have been published. None of these follow-up publications by members of the wwPDB organization can compete with the original publication in terms of citations and influence. Meanwhile, authors increasingly choose to use URLs of the PDB in the text instead of citing PDB papers, disrupting the growth of literature citations. A comparison of data usage statistics and paper citations shows that PDB Web access is highly correlated with URL mentions in the text. The results reveal trends in how authors cite a biomedical data repository and may provide useful insight into how to measure the impact of a data repository.

10.
In genetic epidemiology, genome-wide association studies (GWAS) are used to rapidly scan a large set of genetic variants and thus to identify associations with a particular trait or disease. The GWAS philosophy differs from that of conventional candidate-gene-based approaches, which directly test the effects of genetic variants of potentially contributory genes in an association study. One controversial question is whether GWAS provide relevant scientific outcomes in comparison with candidate-gene studies. We therefore performed a bibliometric study using two citation metrics to assess whether GWAS have contributed a substantial gain in knowledge discovery compared with candidate-gene approaches. We selected GWAS published between 2005 and 2009 and matched them with candidate-gene studies on the same topic published in the same period. We observed that the GWAS papers received, on average, 30±55 citations more than the candidate-gene papers 1 year after their publication date, and 39±58 citations more 2 years after their publication date. The GWAS papers were, on average, 2.8±2.4 and 2.9±2.4 times more cited than expected, 1 and 2 years after their publication date, whereas the candidate-gene papers were 1.5±1.2 and 1.5±1.4 times more cited than expected. While the evaluation of the contribution to scientific research through citation metrics may be challenged, it cannot be denied that GWAS are great hypothesis generators and a powerful complement to candidate-gene studies.

11.
We tested the underlying assumption that citation counts are reliable predictors of future success, analyzing complete citation data on the careers of scientists. Our results show that i) among all citation indicators, the annual citations at the time of prediction is the best predictor of future citations; ii) future citations of a scientist's published papers can be predicted accurately, at least for a 1-year prediction horizon; but iii) future citations of future work are hardly predictable.

12.

Background

Influential medical journals shape medical science and practice and their prestige is usually appraised by citation impact metrics, such as the journal impact factor. However, how permanent are medical journals and how stable is their impact over time?

Methods and Results

We evaluated what happened to general medical journals that were publishing papers half a century ago, in 1959. Data were retrieved from ISI Web of Science for citations and PubMed (Journals function) for journal history. Of 27 eligible journals publishing in 1959, 4 have stopped circulation (including two of the most prestigious journals in 1959) and another 7 changed name between 1959 and 2009. Only 6 of these 27 journals have been published continuously with their initial name since they started circulation. The citation impact of papers published in 1959 gives a very different picture from the current journal impact factor; the correlation between the two is non-significant and very close to zero. Only 13 of the 5,223 papers published in 1959 received at least 5 citations in 2009.

Conclusions

Journals are more permanent entities than single papers, but they are also subject to major change and their relative prominence can change markedly over time.

13.
The accuracy of quotations and references in six medical journals published during January 1984 was assessed. The original author was misquoted in 15% of all references, and most of the errors would have misled readers. Errors in the citation of references occurred in 24% of cases, of which 8% were major errors, that is, errors that prevented immediate identification of the source of the reference. Inaccurate quotations and citations are displeasing for the original author, misleading for the reader, and mean that untruths become "accepted fact". Suggestions for reducing these high levels of inaccuracy include returning papers scheduled for publication with citation errors to the author for complete checking, and adding to the journal a permanent column specifically for misquotations.

14.
Understanding how to source agricultural raw materials sustainably is challenging in today’s globalized food system, given the variety of issues to be considered and the multitude of suggested indicators for representing these issues. Furthermore, stakeholders in the global food system both impact these issues and are themselves vulnerable to them, an important duality that is often implied but not explicitly described. The attention given to these issues and conceptual frameworks varies greatly—depending largely on the stakeholder perspective—as does the set of indicators developed to measure them. To better structure these complex relationships and assess any gaps, we collate a comprehensive list of sustainability issues and a database of sustainability indicators to represent them. To ensure breadth of inclusion, the issues are drawn from three perspectives: major global sustainability assessments, sustainability communications from global food companies, and conceptual frameworks of sustainable livelihoods from academic publications. These terms are integrated across perspectives using a common vocabulary, classified by their relevance to impacts and vulnerabilities, and categorized into groups by economic, environmental, physical, human, social, and political characteristics. These issues are then associated with over 2,000 sustainability indicators gathered from existing sources. A gap analysis is then performed to determine whether particular issues and issue groups are over- or underrepresented. This process results in 44 “integrated” issues—24 impact issues and 36 vulnerability issues, with some issues belonging to both frameworks—that are composed of 318 “component” issues. The gap analysis shows that although every integrated issue is mentioned at least 40% of the time across perspectives, no issue is mentioned more than 70% of the time. A few issues infrequently mentioned across perspectives also have relatively few indicators available to fully represent them. Issues in the impact framework generally have fewer gaps than those in the vulnerability framework.

15.
Nobel Prizes are commonly seen to be among the most prestigious achievements of our times. Based on mining several million citations, we quantitatively analyze the processes driving paradigm shifts in science. We find that groundbreaking discoveries of Nobel Prize Laureates and other famous scientists are not only acknowledged by many citations of their landmark papers. Surprisingly, they also boost the citation rates of their previous publications. Given that innovations must outcompete the rich-get-richer effect for scientific citations, it turns out that they can make their way only through citation cascades. A quantitative analysis reveals how and why these happen. Science appears to behave like a self-organized critical system, in which citation cascades of all sizes occur, from continuous scientific progress all the way up to scientific revolutions, which change the way we see our world. Measuring the "boosting effect" of landmark papers, our analysis reveals how new ideas and new players can make their way and finally triumph in a world dominated by established paradigms. The underlying "boost factor" is also useful for discovering scientific breakthroughs and talents much earlier than through classical citation analysis, which by now has become a widespread method to measure scientific excellence, influencing scientific careers and the distribution of research funds. Our findings reveal patterns of collective social behavior, which are also interesting from an attention-economics perspective. Understanding the origin of scientific authority may therefore ultimately help to explain how social influence comes about and why the value of goods depends so strongly on the attention they attract.

16.
Dr. Manners     
Good manners make a difference—in science and elsewhere. This includes our social media etiquette as researchers.

Elbows off the table, please. Don't chew with your mouth open. Don't blow your nose at the table. Don't put your feet up on the chair or table. And please, do not yuck my yum. These are basic table manners that have come up at some of our lab meals, and I have often wondered if it was my job to teach my trainees social graces. A good fellow scientist and friend of mine once told me it was absolutely our place as mentors to teach our trainees not only how to do science well, but also how to be well-mannered humans. While these Emily Post-approved table manners might seem old-fashioned (I'm guessing some readers will have to look up Emily Post), I strongly believe they still hold a place in modern society; being in good company never goes out of style.

Speaking of modern society: upon encouragement by several of my scientist friends, I joined Twitter in 2016. My motivation was mainly to hear about pre-prints and publications, conference announcements, and relevant news, science or otherwise. I also follow people who just make me laugh (I highly recommend @ConanOBrien or @dog_rates). I (re)tweet job openings, conference announcements, and interesting new data. Occasionally, I post photos from conferences, or random science-related art. I also appreciate the sense of community that social media brings to the table. However, social media is a venue where I have also seen manners go to die. Rapidly.

It is really shocking to read what some people feel perfectly comfortable tweeting. While most of us can agree that foul language and highly offensive opinions are generally considered distasteful, there are other, subtler but nonetheless equally—if not more—cringe-worthy offenses online, even though I am fairly certain these people would never utter such words in real life. In the era of the pandemic, people tweeting about not being able to eat at their favorite restaurant or travel to some holiday destination because of lockdown show an egregious lack of self-awareness. Sure, it sucks to cancel a wedding due to COVID-19, but do you need to moan about it to your followers—most of whom are likely total strangers—while other people have lost their jobs? If I had a nickel for every first-world complaint I have seen on Twitter, I'd have retired a long time ago; although, to be honest, I would do science for free. However, these examples pale in comparison with another type of tweeter. Reader, I submit to you, "the Humblebragger."

From the Macmillan Buzzword dictionary (via Google): a humblebrag is "a statement in which you pretend to be modest but which you are really using as a way of telling people about your success or achievements." I would further translate this definition to indicate that humblebraggers are starved for attention. After joining Twitter, I quickly found many people using social media to announce how "humble and honored" they are to receive grant or prize X, Y, or Z. In general, these are junior faculty who have perhaps not acquired the self-awareness more senior scientists have. Perhaps the most off-putting posts I have seen are from people who post photos of their NIH application priority scores right after study section, or their Notice of Award (NOA). When did we ever, before social media, send little notes to each other—let alone to complete strangers—announcing our priority scores or NOAs? (Spoiler: never.)

Some of you reading this opinion piece might have humblebragged at one time or another, and might not understand why it is distasteful. Please let me explain. For every person who gets a fundable score, there are dozens more who do not, and they are sad (I speak from many years of experience). While said fundable-score person might be someone we like—and I absolutely, positively wish them well—there are many more people who will feel lousy because they did not get funding from the same review round. When has anyone ever felt good about other people getting something that they, too, desire? I think as children, none of us liked the kid on the playground who ran around with the best new Toy of the Season. As adults, do we feel differently? Along these lines, I have never been a fan of "best poster/talk/abstract" prizes. Trainees should not be striving for these fleeting recognitions and should focus on doing the best science for Science's sake; I really believe this competition process sets people up for life in a negative way—there, I've said it.

Can your friends and colleagues tweet about your honors? Sure, why not; by all means let your well-wishers honor you, and do thank them and graciously congratulate your trainees or colleagues for helping you get there. But to post things yourself? Please. Don't be surprised if you have been muted by many of your followers.

It is notable that many of our most decorated scientists are not on Twitter, or at least never tweet about their accomplishments. I do not recall ever seeing a single Nobel laureate announce how humbled and honored they are about their prize. Of course, I might be wrong, but I am willing to bet the numbers are much lower than what I have observed for junior faculty. True humility will never be demonstrated by announcing your achievements to your social media followers, and I believe humblebragging reveals insecurity more than anything. I hope that many more of us can follow the lead of our top scientists in creativity, rigor, and social media politeness.

17.
18.
How to quantify the impact of a researcher’s or an institution’s body of work is a matter of increasing importance to scientists, funding agencies, and hiring committees. The use of bibliometric indicators, such as the h-index or the Journal Impact Factor, has become widespread despite their known limitations. We argue that most existing bibliometric indicators are inconsistent, biased, and, worst of all, susceptible to manipulation. Here, we pursue a principled approach to the development of an indicator to quantify the scientific impact of both individual researchers and research institutions, grounded on the functional form of the distribution of the asymptotic number of citations. We validate our approach using the publication records of 1,283 researchers from seven scientific and engineering disciplines and the chemistry departments at the 106 U.S. research institutions classified as “very high research activity”. Our approach has three distinct advantages. First, it accurately captures the overall scientific impact of researchers at all career stages, as measured by asymptotic citation counts. Second, unlike other measures, our indicator is resistant to manipulation and rewards publication quality over quantity. Third, our approach captures the time-evolution of the scientific impact of research institutions.

19.
The hypothesis of a Hierarchy of the Sciences, with physical sciences at the top, social sciences at the bottom, and biological sciences in between, is nearly 200 years old. This order is intuitive and reflected in many features of academic life, but whether it reflects the “hardness” of scientific research—i.e., the extent to which research questions and results are determined by data and theories as opposed to non-cognitive factors—is controversial. This study analysed 2,434 papers published in all disciplines that declared to have tested a hypothesis. It was determined how many papers reported “positive” (full or partial) or “negative” support for the tested hypothesis. If the hierarchy hypothesis is correct, then researchers in “softer” sciences should face fewer constraints on their conscious and unconscious biases, and therefore report more positive outcomes. Results confirmed the predictions at all levels considered: discipline, domain, and methodology broadly defined. Controlling for observed differences between pure and applied disciplines, and between papers testing one or several hypotheses, the odds of reporting a positive result were around 5 times higher among papers in the disciplines of Psychology and Psychiatry and Economics and Business compared with Space Science, 2.3 times higher in the domain of the social sciences compared with the physical sciences, and 3.4 times higher in studies applying behavioural and social methodologies to people compared with physical and chemical studies on non-biological material. In all comparisons, biological studies had intermediate values. These results suggest that the nature of the hypotheses tested and the logical and methodological rigour employed to test them vary systematically across disciplines and fields, depending on the complexity of the subject matter and possibly other factors (e.g., a field's level of historical and/or intellectual development). On the other hand, these results support the scientific status of the social sciences against claims that they are completely subjective, by showing that, when they adopt a scientific approach to discovery, they differ from the natural sciences only by a matter of degree.

20.
Tomáš Grim, Oikos (2008) 117(4): 484–487
Publication output is the standard by which scientific productivity is evaluated. Despite a plethora of papers on the issue of publication and citation biases, no study has so far considered a possible effect of social activities on publication output. One of the most frequent social activities in the world is drinking alcohol. In Europe, most alcohol is consumed as beer, and, based on the well-known negative effects of alcohol consumption on cognitive performance, I predicted negative correlations between beer consumption and several measures of scientific performance. Using a survey from the Czech Republic, which has the highest per capita beer consumption rate in the world, I show that increasing per capita beer consumption is associated with lower numbers of papers, total citations, and citations per paper (a surrogate measure of paper quality). In addition, I found the same predicted trends in a comparison of two separate geographic areas within the Czech Republic that are also known to differ in beer consumption rates. These correlations are consistent with the possibility that leisure-time social activities might influence the quality and quantity of scientific work and may be potential sources of publication and citation biases.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)