Similar literature
20 similar documents retrieved.
1.

Background

There is an increasing need to evaluate the production and impact of the medical research carried out by institutions. Many indicators exist, yet little is known about their relevance. The objectives of this systematic review were (1) to identify all the indicators that could be used to measure the output and outcome of medical research carried out in institutions and (2) to describe their methodology, use, and positive and negative points.

Methodology

We searched 3 databases (PubMed, Scopus, Web of Science) using the following keywords: [Research outcome* OR research output* OR bibliometric* OR scientometric* OR scientific production] AND [indicator* OR index* OR evaluation OR metrics]. We included articles presenting, discussing or evaluating indicators that measure the scientific production of an institution. The search was conducted by two independent authors using a standardised data extraction form; for each indicator we extracted its definition, calculation method, rationale, and positive and negative points. To reduce bias, data extraction and analysis were also performed by two independent authors.

Findings

We included 76 articles, in which a total of 57 indicators were identified. We classified these indicators into 6 categories: 9 indicators of research activity, 24 of scientific production and impact, 5 of collaboration, 7 of industrial production, 4 of dissemination, and 8 of health service impact. The most widely discussed and described indicator was the h-index, addressed in 31 articles.

Discussion

The majority of the indicators found are bibliometric indicators of scientific production and impact. Several indicators have been developed to improve on the h-index, which has also inspired the creation of two indicators measuring industrial production and collaboration. Several articles propose indicators of research impact without detailing a methodology for calculating them. Many of the bibliometric indicators identified have been created but never used or discussed further.

2.
Background
The need to evaluate curricula for research sponsorship or professional promotion has led to a search for tools that allow an objective valuation. However, the total number of papers published, the number of citations of a particular author's articles, or the impact factor of the journals in which they are published are inadequate indicators for evaluating the quality and productivity of researchers. The h index, proposed by Hirsch, categorises papers according to the number of citations per article. This tool appears to lack the limitations of other bibliometric tools, but it is less useful for non-English-speaking authors.

Aims
To propose and debate the usefulness of the existing bibliometric indicators and tools for the evaluation and categorisation of researchers and scientific journals.

Methods
Search for papers on bibliometric tools.

Results
There are several hot spots in the debate on the national and international evaluation of researchers' productivity and of the quality of scientific journals. Opinions on impact factors and the h index are discussed. Positive discrimination, using the Q value, is proposed as an alternative for the evaluation of Spanish and Iberoamerican researchers.

Conclusions
It is very important to de-mystify the importance of bibliometric indicators. The impact factor is useful for evaluating journals from the same scientific area, but not for evaluating researchers' curricula. To compare the curricula of two or more researchers, the h index or the proposed Q value should be used; the latter allows positive discrimination in favour of Spanish and Iberoamerican researchers.
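The h index discussed in this abstract has a simple operational definition: an author's h is the largest number such that h of their papers have at least h citations each. A minimal sketch of that calculation, with made-up citation counts for illustration:

```python
def h_index(citations):
    """Largest h such that the author has h papers with >= h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Illustrative (made-up) citation counts for one author's papers.
print(h_index([25, 8, 5, 3, 3, 1, 0]))  # -> 3: three papers have at least 3 citations each
```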

3.
The impact of grants on research productivity has been investigated by a number of retrospective studies, with considerably varying results. The objective of my study was to investigate the impact of funding through the RF President's grants for young scientists on the research productivity of awarded applicants. The study compared the numbers of total articles and citations for awarded and rejected applicants who in 2007 took part in competitions for young candidates of science (CoS's) and doctors of science (DoS's) in the scientific field of medicine. The bibliometric analysis covered the period from 2003 to 2012 (five years before and after the competition); the source of bibliometric data was the eLIBRARY.RU database. The impact of grants on the research productivity of Russian young scientists was assessed using a meta-analytical approach that also drew on data from quasi-experimental studies conducted in other countries. The competition featured 149 CoS's and 41 DoS's, of which 24 (16%) and 22 (54%) applicants, respectively, obtained funding. No difference in the number of total articles and citations was found between awarded and rejected applicants, either at baseline or in 2008–2012. Combining the data from the Russian study with the other quasi-experimental studies (6 studies, 10 competitions) revealed a small treatment effect: an increase in the total number of publications over a 4–5-year period after the competition of 1.23 (95% CI 0.48–1.97). However, the relationship between the numbers of publications published by applicants before and after the competition suggests that this treatment effect reflects the "maturation" of scientists with high baseline publication activity, not the effect of grant funding.
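The abstract reports a pooled estimate (1.23, 95% CI 0.48–1.97) over several quasi-experimental studies, but it does not spell out the meta-analytic model or the study-level data. The sketch below only illustrates a generic fixed-effect (inverse-variance) pooling of per-study effects; the effect sizes and standard errors are hypothetical placeholders, not the study's data.

```python
import math

def inverse_variance_pool(effects, std_errors):
    """Fixed-effect (inverse-variance) pooled estimate with a 95% CI."""
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)

# Hypothetical per-study effects (change in publication counts) and standard errors;
# the per-study inputs behind the pooled 1.23 (95% CI 0.48-1.97) are not given in the abstract.
effects = [0.8, 1.5, 1.1, 2.0, 0.6, 1.4]
std_errors = [0.9, 0.7, 1.2, 1.0, 0.8, 1.1]
print(inverse_variance_pool(effects, std_errors))
```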

4.

Background

The peer review system has traditionally been challenged because of its many limitations, especially when it is used to allocate funding. Bibliometric indicators may well serve as a complement.

Objective

We analyze the relationship between peers’ ratings and bibliometric indicators for Spanish researchers in the 2007 National R&D Plan for 23 research fields.

Methods and Materials

We analyzed peers’ ratings for 2333 applications. We also gathered the principal investigators’ research output and impact and studied the differences between accepted and rejected applications, using the Web of Science database and focusing on the 2002–2006 period. First, we analyzed the distributions of granted and rejected proposals over a given set of bibliometric indicators to test whether there are significant differences. Then, we applied a multiple logistic regression analysis to determine whether bibliometric indicators can by themselves explain the award of grant proposals.
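A minimal sketch of the multiple logistic regression step described above, assuming statsmodels and a toy data frame. The variable names (number of papers, number of first-quartile papers, citations) mirror the indicators discussed in the Results, but the exact model specification and data are not given in this abstract.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical data: one row per application, with the PI's 2002-2006 output (papers),
# papers in first-quartile (Q1) journals, citations, and whether the proposal was
# funded (1) or rejected (0). Values are illustrative only.
df = pd.DataFrame({
    "funded":    [1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0],
    "n_papers":  [24, 15, 31, 15, 8, 4, 27, 11, 22, 26, 15, 7],
    "n_q1":      [10, 5, 15, 5, 2, 0, 12, 3, 9, 9, 5, 1],
    "citations": [310, 150, 520, 150, 60, 15, 430, 90, 280, 300, 150, 30],
})

X = sm.add_constant(df[["n_papers", "n_q1", "citations"]])
model = sm.Logit(df["funded"], X).fit(disp=0)

print(model.summary())            # which indicators are associated with being funded
print(model.predict(X).round(2))  # predicted probability of funding for each application
```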

Results

63.4% of the applications were funded. Bibliometric indicators for accepted proposals showed better prior performance than those for rejected proposals; however, the correlation between peer review and bibliometric indicators is very heterogeneous across areas. The logistic regression analysis showed that the bibliometric indicators that best explain the granting of research proposals in most cases are output (number of published articles) and the number of papers published in journals in the first quartile of the Journal Citation Reports ranking.

Discussion

Bibliometric indicators predict the award of grant proposals at least as well as peer ratings. Social Sciences and Education are the only areas where no relation was found, although this may be due to the limitations of the Web of Science’s coverage. These findings encourage the use of bibliometric indicators as a complement to peer review in most of the analyzed areas.

5.

Objective

To compare expert assessment with bibliometric indicators as tools to assess the quality and importance of scientific research papers.

Methods and Materials

Shortly after their publication in 2005, the quality and importance of a cohort of nearly 700 Wellcome Trust (WT) associated research papers were assessed by expert reviewers; each paper was reviewed by two WT expert reviewers. After 3 years, we compared this initial assessment with other measures of paper impact.

Results

Shortly after publication, 62 (9%) of the 687 research papers were determined to describe at least a ‘major addition to knowledge’; 6 were thought to be ‘landmark’ papers. At an aggregate level, after 3 years, there was a strong positive association between expert assessment and impact as measured by the number of citations and F1000 rating. However, there were some important exceptions, indicating that bibliometric measures may not be sufficient in isolation as measures of research quality and importance, especially not for assessing single papers or small groups of research publications.

Conclusion

When attempting to assess the quality and importance of research papers, we found that sole reliance on bibliometric indicators would have led us to miss papers containing important results as judged by expert review. In particular, some papers that were highly rated by experts were not highly cited during the first three years after publication. Tools that link expert peer review of research paper quality and importance to more quantitative indicators, such as citation analysis, would be valuable additions to the field of research assessment and evaluation.

6.
The phenomenon of oral tolerance refers to a local and systemic state of tolerance induced in the gut after its exposure to innocuous antigens. Recent findings have shown the interrelationship between cellular and molecular components of oral tolerance, but its representation as a network of interactions has not been investigated. Our work aims to identify the causal relationship of each element in an oral tolerance network and to propose a phenomenological model capable of predicting the stochastic behavior of this network when it is manipulated. We compared the changes caused in a “healthy” network by “knock-outs” (KOs) using two approaches: an analytical approach based on Perron–Frobenius theory, and a computational approach, described in this work, used to obtain numerical results for the model. Both approaches identified the most relevant immunological components of this phenomenon, which corroborates empirical results from animal models. Besides explaining in an intelligible fashion how the components interact in a complex manner, we also describe and quantify the importance of KOs that have not been empirically tested.

7.
Regarding postdocs as disposable labour with limited contracts is damaging for science. Universities need to offer them better career perspectives. Subject Categories: Careers, Science Policy & Publishing

In many academic systems, permanent positions for scientists (“tenure”) are a rare exception. In Germany, 90% of the researchers employed in academia work on temporary contracts, often with less than a year’s duration. Most of the workforce on short‐term contracts are early‐career researchers (ECRs): PhD students, postdocs, or principal investigators aspiring to become tenured professors. Given the short‐term perspectives and uncertain contract renewals, and because only a small fraction of ECRs will eventually get a tenured position, planning the future is difficult or even impossible for them. The result is a toxic environment of hypercompetition, perverse incentives, and steep hierarchies, which discourages many highly competent and motivated young scientists, who eventually leave in frustration. In the life sciences in particular, decisions about hiring or promotion are often based on indicators such as journal impact factor or the amount of third‐party funding. Such metrics purport to objectively quantify research quality and innovation, but instead they foster a culture of questionable research practices, selective or non‐reporting, exaggerated interpretation of results, and an emphasis on quantity over quality. Much has been written about this situation (Alberts et al, 2014), and there is a broad consensus among researchers, research administrators, funders, and learned societies on the need to reform the academic system.

8.
Biomedical journals must adhere to strict standards of editorial quality. In a globalized academic scenario, biomedical journals must compete firstly to publish the most relevant original research and secondly to obtain the broadest possible visibility and the widest dissemination of their scientific contents. The cornerstone of the scientific process is still the peer-review system, but additional quality criteria should be met. Recently, access to medical information has been revolutionized by electronic editions. Bibliometric databases such as MEDLINE, the ISI Web of Science and Scopus offer comprehensive online information on medical literature. Classically, the prestige of biomedical journals has been measured by their impact factor but, recently, other indicators such as SCImago SJR or the Eigenfactor are emerging as alternative indices of a journal's quality. Assessing the scholarly impact of research and the merits of individual scientists remains a major challenge. Allocation of authorship credit also remains controversial. Furthermore, in our Kafkaesque world, we prefer to count rather than read the articles we judge. Quantitative publication metrics (research output) and citation analyses (scientific influence) are key determinants of the scientific success of individual investigators. However, academia is embracing new objective indicators (such as the “h” index) to evaluate scholarly merit. The present review discusses some editorial issues affecting biomedical journals, currently available bibliometric databases, bibliometric indices of journal quality and, finally, indicators of research performance and scientific success.

9.
In order to improve the h-index in terms of its accuracy and sensitivity to the form of the citation distribution, we propose a new bibliometric index. The basic idea is to define, for any author with a given number of citations, an “ideal” citation distribution that represents a benchmark in terms of the number of papers and the number of citations per publication, and to obtain an index whose value increases as the real citation distribution approaches this ideal form. The method is very general because the ideal distribution can be defined differently according to the main objective of the index. In this paper we propose to define it by a “squared-form” distribution: this is consistent with many popular bibliometric indices, which reach their maximum value when the distribution is basically a “square”. This approach generally rewards the more regular and reliable researchers, and it seems to be especially suitable for dealing with common situations such as applications for academic positions. To show the advantages of the proposed index, some mathematical properties are proved and an application to real data is presented.
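The abstract does not spell out the formula of the proposed index, so the sketch below is only a hypothetical illustration of the underlying idea: benchmark an author's citation distribution against the ideal h × h "square" and reward distributions that come close to it. The `squareness` function and its definition are assumptions for illustration, not the index defined in the paper.

```python
def h_index(citations):
    ranked = sorted(citations, reverse=True)
    return max([0] + [rank for rank, c in enumerate(ranked, 1) if c >= rank])

def squareness(citations):
    """Hypothetical illustration: share of an author's citations that fall inside the
    h x h 'square' benchmark (h papers with at least h citations each). This is NOT
    the formula of the index proposed in the paper; it only illustrates the idea of
    comparing a real citation distribution to an ideal squared-form benchmark."""
    h = h_index(citations)
    total = sum(citations)
    if total == 0:
        return 0.0
    inside = sum(min(c, h) for c in sorted(citations, reverse=True)[:h])
    return inside / total

# Two authors with the same h-index (3) but different citation shapes (made-up data):
print(squareness([3, 3, 3]))         # 1.0  - the distribution is exactly the ideal square
print(squareness([40, 5, 3, 1, 0]))  # ~0.18 - far from the square, dominated by one paper
```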

10.
Greater investment in developing new drugs and vaccines is required in order to eradicate malaria, and these precious funds must be carefully managed to achieve the greatest impact. We evaluate existing efforts to discover and develop new drugs and vaccines for malaria, to determine how malaria R&D can best benefit from an enhanced open-source approach and how such a business model might operate. We assessed research articles, patents and clinical trials, and conducted a small survey among malaria researchers. Our results demonstrate that the public and philanthropic sectors are financing and performing the majority of malaria drug/vaccine discovery and development, but are then restricting access through patents, ‘closed’ publications and physical specimens that are hidden away. This makes little sense, since it is also the public and philanthropic sectors that purchase the drugs and vaccines. We recommend that a more “open source” approach be taken, making the entire value chain more efficient through greater transparency, which may lead to more extensive collaborations. This can, for example, be achieved by empowering an existing organization like the Medicines for Malaria Venture (MMV) to act as a clearing house for malaria-related data. The malaria researchers that we surveyed indicated that they would use such registry data to increase collaboration. Finally, we question the utility of publicly or philanthropically funded patents for malaria medicines, where little to no profit is available. Malaria R&D benefits from a publicly and philanthropically funded architecture, which starts with academic research institutions and product development partnerships, continues with commercialization assistance through UNITAID, and ends with procurement through mechanisms like The Global Fund to Fight AIDS, Tuberculosis and Malaria and the U.S. President’s Malaria Initiative. We believe that a fresh look should be taken at the costs and benefits of patents, particularly those related to new malaria medicines, and that alternative incentives, like WHO prequalification, should be considered.

11.

Background

Although researchers have worked in collaboration since the origins of modern science and the publication of the first scientific journals in the seventeenth century, this phenomenon has acquired exceptional importance in the last several decades. Since the mid-twentieth century, new knowledge has been generated within an ever-growing network of investigators working cooperatively in research groups across countries and institutions. Cooperation is a crucial determinant of academic success.

Objective

The aim of the present paper is to analyze the evolution of scientific collaboration at the micro level, with regard to the scientific production generated in psoriasis research.

Methods

A bibliographic search of the Medline database was carried out for records containing the MeSH terms “psoriasis” or “psoriatic arthritis”. The search results were limited to articles, reviews and letters. After identifying the co-authorships of documents on psoriasis indexed in the Medline database (1942–2013), various bibliometric indicators were obtained, including the average number of authors per document and the degree of multi-authorship over time. In addition, we performed a network analysis to study the evolution of certain features of the co-authorship network as a whole: average degree, size of the largest component, clustering coefficient, density and average distance. We also analyzed the evolution of the giant component to characterize the changing research patterns in the field, and we calculated social network indicators for the nodes, namely betweenness and closeness.
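The whole-network and node-level indicators listed in these Methods (average degree, size of the largest component, clustering coefficient, density, average distance, betweenness, closeness) can all be computed with standard tooling. A minimal sketch using networkx on a made-up toy co-authorship graph; the author labels and edges are illustrative, not data from the study.

```python
import networkx as nx

# Toy co-authorship graph: nodes are authors, an edge means at least one joint paper
# (illustrative data only; the study built this from Medline co-authorships, 1942-2013).
G = nx.Graph()
G.add_edges_from([
    ("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"),
    ("D", "E"), ("E", "F"), ("F", "D"), ("G", "H"),
])

avg_degree = sum(dict(G.degree()).values()) / G.number_of_nodes()
largest_cc = max(nx.connected_components(G), key=len)
giant = G.subgraph(largest_cc)  # distances are only defined within a connected component

print("average degree:        ", round(avg_degree, 2))
print("largest component:     ", len(largest_cc), "of", G.number_of_nodes(), "authors")
print("clustering coefficient:", round(nx.average_clustering(G), 2))
print("density:               ", round(nx.density(G), 2))
print("average distance:      ", round(nx.average_shortest_path_length(giant), 2))
print("betweenness:           ", nx.betweenness_centrality(giant))
print("closeness:             ", nx.closeness_centrality(giant))
```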

Results

The main active research clusters in the area were identified, along with their authors of reference. Our analysis of 28,670 documents sheds light on different aspects related to the evolution of scientific collaboration in the field, including the progressive increase in the mean number of co-authors (which stood at 5.17 in the 2004–2013 decade), and the rise in multi-authored papers signed by many different authors (in the same decade, 25.77% of the documents had between 6 and 9 co-authors, and 10.28% had 10 or more). With regard to the network indicators, the average degree gradually increased up to 10.97 in the study period. The percentage of authors pertaining to the largest component also rose to 73.02% of the authors. The clustering coefficient, on the other hand, remained stable throughout the entire 70-year period, with values hovering around 0.9. Finally, the average distance peaked in the decades 1974–1983 (8.29) and 1984–2003 (8.12) then fell over the next two decades, down to 5.25 in 2004–2013. The construction of the co-authorship network (threshold of collaboration ≥ 10 co-authored works) revealed a giant component of 161 researchers, containing 6 highly cohesive sub-components.

Conclusions

Our study reveals the existence of a growing research community in which collaboration is increasingly important. We can highlight an essential feature associated with scientific collaboration: multi-authored papers, with growing numbers of collaborators contributing to them, are becoming more and more common; therefore, the formation of research groups of increasing depth (specialization) and breadth (multidisciplinarity) is now a cornerstone of research success.

12.
Background
During 2017, twenty health districts (locations) in Mexico implemented a dengue outbreak Early Warning and Response System (EWARS), which processes epidemiological, meteorological and entomological alarm indicators to predict dengue outbreaks and trigger early response activities. Out of the 20 priority districts, where more than one fifth of all national disease transmission in Mexico occurs, eleven districts were purposely selected and analyzed. Nine districts presented outbreak alarms by EWARS but without subsequent outbreaks (“non-outbreak districts”) and two presented alarms with subsequent dengue outbreaks (“outbreak districts”). This evaluation study assesses and compares the impact of alarm-informed response activities, and the consequences of failing to respond in a timely and adequate manner, across the two groups.

Methods
Five indicators of dengue outbreak response (larval control, entomological studies with water container interventions, focal spraying, indoor residual spraying and fogging) were analyzed across the two groups (“outbreak districts” and “non-outbreak districts”). However, for quality-control purposes, only qualitative concluding remarks were derived from the fifth response indicator (fogging).

Results
The average coverage of vector control responses was significantly higher in non-outbreak districts across all four quantitative indicators. In the “outbreak districts”, the response activities started late and were of much lower intensity than in the “non-outbreak districts”. Vector control teams at district level demonstrated diverse levels of compliance with local guidelines for ‘initial’, ‘early’ and ‘late’ responses to outbreak alarms, which could potentially explain the different outcomes observed following the outbreak alarms.

Conclusion
Failing to respond in a timely and adequate way to the alarm signals generated by EWARS was shown to negatively impact the outbreak control process. On the other hand, districts with an adequate and timely response guided by alarm signals demonstrated successful records of outbreak prevention. This study presents important operational scenarios of failing or succeeding with EWARS, but warrants investigating the effectiveness and cost-effectiveness of EWARS using more robust designs.

13.
Access to clean water is a grand challenge in the 21st century. Water safety testing for pathogens currently depends on surrogate measures such as fecal indicator bacteria (e.g., E. coli). Metagenomics involves high-throughput, culture-independent, unbiased shotgun sequencing of DNA from environmental samples, and it might transform water safety testing by detecting waterborne pathogens directly instead of through their surrogates. Yet emerging innovations such as metagenomics are often fiercely contested. Innovations are shaped and constructed not only by technology but also by the social systems and values in which they are embedded, such as experts’ attitudes towards new scientific evidence. We conducted a classic three-round Delphi survey comprising 107 questions. A multidisciplinary expert panel (n = 24), representing the continuum from discovery scientists to policymakers, evaluated the emergence of metagenomics tests. To the best of our knowledge, we report here the first Delphi foresight study of experts’ attitudes on (1) the top 10 priority evidentiary criteria for adoption of metagenomics tests for water safety, (2) the specific issues critical to governance of the metagenomics innovation trajectory where there is consensus or dissensus among experts, (3) the anticipated time lapse from discovery to practice of metagenomics tests, and (4) the role and timing of public engagement in the development of metagenomics tests. The ability of a test to distinguish between harmful and benign waterborne organisms, analytical/clinical sensitivity, and reproducibility were the top three evidentiary criteria for adoption of metagenomics. Experts agree that metagenomic testing will provide novel information, but there is dissensus on whether metagenomics will replace current water safety testing methods or affect public health end points (e.g., a reduction in boil-water advisories). Interestingly, experts view the public as relevant in a “downstream capacity” for adoption of metagenomics rather than in a co-productionist role at the “upstream” scientific design stage of metagenomics tests. In summary, these findings offer strategic foresight to govern metagenomics innovations symmetrically: by identifying areas where acceleration (e.g., consensus areas) or deceleration/reconsideration (e.g., dissensus areas) of the innovation trajectory might be warranted. Additionally, we show how scientific evidence is subject to potential social construction by experts’ value systems, and the need for greater upstream public engagement on metagenomics innovations.

14.
15.
The proper allocation of public health resources for research and control requires quantification of both a disease's current burden and the trend in its impact. Infectious diseases that have been labeled as “emerging infectious diseases” (EIDs) have received heightened scientific and public attention and resources. However, the label ‘emerging’ is rarely backed by quantitative analysis and is often used subjectively. This can lead to over-allocation of resources to diseases that are incorrectly labeled “emerging,” and insufficient allocation of resources to diseases for which evidence of an increasing or high sustained impact is strong. We suggest a simple quantitative approach, segmented regression, to characterize the trends and emergence of diseases. Segmented regression identifies one or more trends in a time series and determines the most statistically parsimonious split(s) (or joinpoints) in the time series. These joinpoints in the time series indicate time points when a change in trend occurred and may identify periods in which drivers of disease impact change. We illustrate the method by analyzing temporal patterns in incidence data for twelve diseases. This approach provides a way to classify a disease as currently emerging, re-emerging, receding, or stable based on temporal trends, as well as to pinpoint the time when the change in these trends happened. We argue that quantitative approaches to defining emergence based on the trend in impact of a disease can, with appropriate context, be used to prioritize resources for research and control. Implementing this more rigorous definition of an EID will require buy-in and enforcement from scientists, policy makers, peer reviewers and journal editors, but has the potential to improve resource allocation for global health.
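A minimal sketch of the segmented-regression idea described above: fit a piecewise-linear model with a single joinpoint chosen by grid search over candidate breakpoints (lowest residual sum of squares). The incidence series is made up, and published joinpoint methodology additionally tests how many joinpoints are statistically justified, which this sketch omits.

```python
import numpy as np

def fit_one_joinpoint(t, y):
    """Grid-search a single breakpoint for a piecewise-linear (segmented) fit.

    Returns (breakpoint, slope_before, slope_after, rss). A sketch only: it does not
    test whether a joinpoint is statistically warranted, as joinpoint software does.
    """
    best = None
    for k in range(2, len(t) - 2):  # candidate breakpoints, keep >= 2 points per segment
        X = np.column_stack([np.ones_like(t), t, np.clip(t - t[k], 0, None)])
        coef, resid, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = float(resid[0]) if len(resid) else float(np.sum((y - X @ coef) ** 2))
        if best is None or rss < best[3]:
            best = (t[k], coef[1], coef[1] + coef[2], rss)
    return best

# Made-up annual incidence: roughly flat for a decade, then rising (an 'emerging' pattern).
years = np.arange(2000, 2020, dtype=float)
cases = np.r_[np.full(10, 50.0), 50 + 12 * np.arange(10)]
cases = cases + np.random.default_rng(0).normal(0, 4, cases.size)

bp, slope_before, slope_after, _ = fit_one_joinpoint(years, cases)
print(f"joinpoint ~{bp:.0f}: slope {slope_before:.1f} -> {slope_after:.1f} cases/year")
```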

16.
The National Institute of General Medical Sciences (NIGMS) at the U.S. National Institutes of Health (NIH) is committed to supporting the safety of the nation’s biomedical research and training environments. Institutional training grants affect many trainees and can have a broad influence across their parent institutions, making them good starting points for our initial efforts to promote the development and maintenance of robust cultures of safety at U.S. academic institutions. In this Perspective, we focus on laboratory safety, although many of the strategies we describe for improving laboratory safety are also applicable to other forms of safety including the prevention of harassment, intimidation, and discrimination. We frame the problem of laboratory safety using a number of recent examples of tragic accidents, highlight some of the lessons that have been learned from these and other events, discuss what NIGMS is doing to address problems related to laboratory safety, and outline steps that institutions can take to improve their safety cultures.

All new funding opportunity announcements (FOAs) for training programs supported by the National Institute of General Medical Sciences (NIGMS) contain the expectation that the programs will promote “inclusive, safe and supportive scientific and training environments.” In this context, the word “safe” refers to several aspects of safety. First, we mean an environment free from harassment and intimidation, in which everyone participating is treated in a respectful and supportive manner, optimized for productive learning and research. We also mean that institutions should ensure that their campuses are as safe as possible so that individuals can focus on their studies and research. Finally, we mean safety in the laboratory and clinical spaces. In this Perspective, we focus on this last issue and describe some of the approaches NIGMS is taking to help the biomedical research community move toward an enhanced culture of safety in which core values and the behaviors of leadership, principal investigators (PIs), research staff, and trainees emphasize safety over competing goals.

17.
During sentence production, linguistic information (semantics, syntax, phonology) of words is retrieved and assembled into a meaningful utterance. There is still debate on how we assemble single words into more complex syntactic structures such as noun phrases or sentences. In the present study, event-related potentials (ERPs) were used to investigate the time course of syntactic planning. Thirty-three volunteers described visually animated scenes using naming formats varying in syntactic complexity: from simple words (‘W’, e.g., “triangle”, “red”, “square”, “green”, “to fly towards”), to noun phrases (‘NP’, e.g., “the red triangle”, “the green square”, “to fly towards”), to a sentence (‘S’, e.g., “The red triangle flies towards the green square.”). Behaviourally, we observed an increase in errors and corrections with increasing syntactic complexity, indicating a successful experimental manipulation. In the ERPs following scene onset, syntactic complexity variations were found in a P300-like component (‘S’/‘NP’ > ‘W’) and a fronto-central negativity (linear increase with syntactic complexity). In addition, the scene could display either of two actions, which was unpredictable for the participant, as the disambiguation occurred only later in the animation. Time-locked to the moment of visual disambiguation of the action, and thus of the verb, we observed another P300 component (‘S’ > ‘NP’/‘W’). The data show, for the first time, evidence of sensitivity to syntactic planning within the P300 time window, time-locked to visual events critical for syntactic planning. We discuss the findings in the light of current views of syntactic planning.

18.
Research needs a balance of risk‐taking in “breakthrough projects” and gradual progress. For building a sustainable knowledge base, it is indispensable to provide support for both. Subject Categories: Careers, Economics, Law & Politics, Science Policy & Publishing

Science is about venturing into the unknown to find unexpected insights and establish new knowledge. Increasingly, academic institutions and funding agencies such as the European Research Council (ERC) explicitly encourage and support scientists to foster risky and hopefully ground‐breaking research. Such incentives are important and have been greatly appreciated by the scientific community. However, the success of the ERC has had its downsides, as other actors in the funding ecosystem have adopted the ERC’s focus on “breakthrough science” and respective notions of scientific excellence. We argue that these tendencies are concerning since disruptive breakthrough innovation is not the only form of innovation in research. While continuous, gradual innovation is often taken for granted, it could become endangered in a research and funding ecosystem that places ever higher value on breakthrough science. This is problematic since, paradoxically, breakthrough potential in science builds on gradual innovation. If the value of gradual innovation is not better recognized, the potential for breakthrough innovation may well be stifled.
Concerns that the hypercompetitive dynamics of the current scientific system may impede rather than spur innovative research have been voiced for many years (Alberts et al, 2014). As performance indicators continue to play a central role for promotions and grants, researchers are under pressure to publish extensively, quickly, and preferably in high‐ranking journals (Burrows, 2012). These dynamics increase the risk of mental health issues among scientists (Jaremka et al, 2020), dis‐incentivise relevant and important work (Benedictus et al, 2016), decrease the quality of scientific papers (Sarewitz, 2016) and induce conservative and short‐term thinking rather than risk‐taking and original thinking required for scientific innovation (Alberts et al, 2014; Fochler et al, 2016). Against this background, strong incentives for fostering innovative and daring research are indispensable.

19.
Why are some scientific disciplines, such as sociology and psychology, more fragmented into conflicting schools of thought than other fields, such as physics and biology? Furthermore, why does high fragmentation tend to coincide with limited scientific progress? We analyzed a formal model in which scientists seek to identify the correct answer to a research question. Each scientist is influenced by three forces: (i) signals received from the correct answer to the question; (ii) peer influence; and (iii) noise. We observed the emergence of different macroscopic patterns of collective exploration, and studied how the three forces affect the degree to which disciplines fall apart into divergent fragments, or so-called “schools of thought”. We conducted two simulation experiments in which we tested (A) whether the three forces foster or hamper progress, and (B) whether disciplinary fragmentation causally affects scientific progress and vice versa. We found that fragmentation critically limits scientific progress; strikingly, there is no effect in the opposite causal direction. What is more, our results show that at the heart of the mechanisms driving scientific progress we find (i) social interactions and (ii) peer disagreement. In fact, fragmentation is increased and progress limited if the simulated scientists are open to influence only by peers with very similar views, or when within-school diversity is lost. Finally, disciplines in which the scientists received strong signals from the correct answer were less fragmented and experienced faster progress. We discuss the model’s implications for the design of social institutions fostering interdisciplinarity and participation in science.
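The abstract names the three forces (signal from the correct answer, peer influence, noise) but not the exact update rule. The following is a hypothetical minimal opinion-dynamics sketch with that structure; the parameter values and the bounded-confidence rule for choosing influential peers are assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(42)

TRUTH = 0.0        # the correct answer to the research question
N, STEPS = 100, 500
signal, peer, noise = 0.05, 0.30, 0.02   # hypothetical strengths of the three forces
confidence = 0.2   # assumption: only peers whose views differ by less than this are influential

beliefs = rng.uniform(-1, 1, N)
for _ in range(STEPS):
    for i in range(N):
        # (ii) peer influence: move toward the mean view of sufficiently similar peers
        close = beliefs[np.abs(beliefs - beliefs[i]) < confidence]
        peer_pull = close.mean() - beliefs[i]
        # (i) signal from the correct answer, plus (iii) noise
        beliefs[i] += signal * (TRUTH - beliefs[i]) + peer * peer_pull + rng.normal(0, noise)

# Fragmentation: how many distinct 'schools of thought' (clusters of similar beliefs) remain?
ordered = np.sort(beliefs)
schools = np.split(ordered, np.where(np.diff(ordered) > confidence)[0] + 1)
print(f"{len(schools)} school(s); mean distance to truth = {np.abs(beliefs - TRUTH).mean():.3f}")
```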

20.
Over the past decade, biomarker discovery has become a key goal in psychiatry, to aid in the more reliable diagnosis and prognosis of heterogeneous psychiatric conditions and the development of tailored therapies. Nevertheless, the prevailing statistical approach is still the mean group comparison between “cases” and “controls,” which tends to ignore within-group variability. In this educational article, we used empirical data simulations to investigate how effect size, sample size, and the shape of distributions impact the interpretation of mean group differences for biomarker discovery. We then applied these statistical criteria to evaluate biomarker discovery in one area of psychiatric research: autism research. Across the most influential areas of autism research, effect size estimates ranged from small (d = 0.21, anatomical structure) to medium (d = 0.36, electrophysiology; d = 0.5, eye-tracking) to large (d = 1.1, theory of mind). We show that in normal distributions, this translates to approximately 45% to 63% of cases performing within 1 standard deviation (SD) of the typical range, i.e., they do not have a deficit/atypicality in a statistical sense. For a measure to have diagnostic utility as defined by 80% sensitivity and 80% specificity, a Cohen’s d of 1.66 is required, with 40% of cases still falling within 1 SD. However, in both normal and nonnormal distributions, 1 (skewed) or 2 (platykurtic, bimodal) biologically plausible subgroups may exist despite small or even nonsignificant mean group differences. This conclusion contrasts sharply with the way mean group differences are frequently reported: over 95% of studies omitted the “on average” when summarising their findings in their abstracts (“autistic people have deficits in X”), which can be misleading, as it implies that the group-level difference applies to all individuals in that group. We outline practical approaches and steps for researchers to explore mean group comparisons for the discovery of stratification biomarkers.
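The link between Cohen's d and diagnostic utility quoted above can be checked directly for two equal-variance normal distributions with a cutoff placed midway between the group means; this is the textbook setup and may differ in detail from the paper's simulations.

```python
from scipy.stats import norm

def sens_spec(d, cutoff=None):
    """Sensitivity/specificity for cases ~ N(d, 1) vs. controls ~ N(0, 1).

    With the cutoff midway between the two means (the textbook choice),
    sensitivity and specificity are equal.
    """
    cutoff = d / 2 if cutoff is None else cutoff
    sensitivity = 1 - norm.cdf(cutoff, loc=d, scale=1)  # cases correctly above the cutoff
    specificity = norm.cdf(cutoff, loc=0, scale=1)      # controls correctly below the cutoff
    return sensitivity, specificity

for d in (0.21, 0.36, 0.5, 1.1, 1.66):
    sens, spec = sens_spec(d)
    print(f"d = {d:<4}: sensitivity {sens:.0%}, specificity {spec:.0%}")
# d = 1.66 gives ~80%/80%, matching the diagnostic-utility threshold cited in the abstract.
```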
