Similar articles (20 results)
1.
Non-technical summaries of research projects make it possible to track the number and purpose of animal experiments related to SARS-CoV-2 research, providing greater transparency on animal use. Subject Categories: Economics, Law & Politics, Pharmacology & Drug Discovery, Science Policy & Publishing

The COVID-19 pandemic has accelerated biomedical research and drug development to an unprecedented pace. Governments worldwide released emergency funding for biomedical research that allowed scientists to focus on COVID-19 and related drug and vaccine development. As a result, a flood of scientific articles on SARS-CoV-2 and COVID-19 has been published since early 2020. More importantly though, within less than 2 years, scientists in academia and industry developed vaccines against the virus from scratch: several vaccines have now received regulatory approval and are being mass produced to immunize the human population worldwide. This colossal success of science rests in large part on the shoulders of animals that were used in basic and pre-clinical research and regulatory testing. Nonetheless, animal experimentation has remained a highly controversial and heated topic between research advocates and animal rights activists. During the past decades, European policymakers responded to the debate by enacting stricter regulations, which has inevitably increased the bureaucratic hurdles for experimentation on animals. Scientists have long spoken out against this additional burden, arguing that both basic and translational research to improve human health crucially relies on animal experimentation—as the COVID-19 pandemic aptly demonstrated (Genzel et al, 2020).

2.
Even though the predominant model of science communication with the public is now based on dialogue, many experts still adhere to the outdated deficit model of informing the public. Subject Categories: Genetics, Gene Therapy & Genetic Disease, S&S: History & Philosophy of Science, S&S: Ethics

During the past decades, public communication of science has undergone profound changes: from policy-driven to policy-informing, from promoting science to interpreting science, and from dissemination to interaction (Burgess, 2014). These shifts in communication paradigms have an impact on what is expected from scientists who engage in public communication: they should be seen as fellow citizens rather than experts whose task is to increase the scientific literacy of the lay public. Many scientists engage in science communication because they see this as their responsibility toward society (Loroño-Leturiondo & Davies, 2018). Yet, a significant proportion of researchers still "view public engagement as an activity of talking to rather than with the public" (Hamlyn et al, 2015). The highly criticized "deficit model" that sees the role of experts as educating the public to mitigate skepticism still persists (Simis et al, 2016; Suldovsky, 2016). Indeed, a survey we conducted among experts in training seems to corroborate the persistence of the deficit model even among younger scientists. Based on these results and our own experience with organizing public dialogues about human germline gene editing (Box 1), we discuss the implications of this outdated science communication model and an alternative model of public engagement that aims to align science with the needs and values of the public.

Box 1: The DNA-dialogue project

The Dutch DNA-dialogue project invited citizens to discuss and form opinions about human germline gene editing. During 2019 and 2020, this project organized twenty-seven dialogues with professionals, such as embryologists and midwives, and various lay audiences. Different scenarios of a world in 2039 (https://www.rathenau.nl/en/making‐perfect‐lives/discussing‐modification‐heritable‐dna‐embryos) served as the starting point. Participants expressed their initial reactions to these scenarios with emotion-cards and thereby explored the values that they and other participants deemed important as they elaborated further. Starting each dialogue in this way provides a context that enables everyone to participate in dialogue about complex topics such as human germline gene editing and demonstrates that scientific knowledge should not be a prerequisite for participation. An important example of "different" relevant knowledge surfaced during a dialogue with children between 8 and 12 years of age at the Sophia Children's Hospital in Rotterdam (Fig 1). Most adults in the DNA-dialogues accepted human germline gene modification for severe genetic diseases, as they wished the best possible care and outcome for their children. The children at Sophia, however, stated that they would find it terrible if their parents had altered something about them before they had been born; their parents would not even have known them. Some children went so far as to say they would no longer be themselves without their genetic condition, and that their condition had also given them experiences they would rather not have missed.

Figure 1: Children participating in a DNA-dialogue meeting. Photographed by Levien Willemse.

3.
4.
Lessons from implementing quality control systems in an academic research consortium to improve Good Scientific Practice and reproducibility. Subject Categories: Microbiology, Virology & Host Pathogen Interaction, Science Policy & Publishing

Low reproducibility rates within biomedical research negatively impact productivity and translation. One promising approach to enhance the transfer of robust results from preclinical research into clinically relevant and transferable data is the systematic implementation of quality measures in daily laboratory routines.
Today's fast-evolving research environment needs effective quality measures to ensure reproducibility and data integrity (Macleod et al, 2014; Begley et al, 2015; Begley & Ioannidis, 2015; Baker, 2016). Academic research institutions and laboratories may be as committed to good scientific practices (GSPs) as their counterparts in the biotech and pharmaceutical industry, but they operate largely without clearly defined standards (Bespalov et al, 2021; Emmerich et al, 2021). Although many universities expect their scientists to adhere to GSPs, they often neither systematically support nor monitor the quality of their research activities. Peer review of publications is still regarded as the primary means of quality control in academic research. However, reviewers only assess work after it has been performed—often over years—and interventions in the experimental process are thus no longer possible. The reasons for the lack of dedicated quality management (QM) implementations in academic laboratories include an anticipated overload of regulatory tasks that could negatively affect productivity, concerns about the loss of scientific freedom, and, importantly, limited resources in academia and academic funding schemes.

5.
Open Science calls for transparent science and involvement of various stakeholders. Here are examples of and advice for meaningful stakeholder engagement. Subject Categories: Economics, Law & Politics, History & Philosophy of Science

The concepts of Open Science and Responsible Research and Innovation call for a more transparent and collaborative science, and more participation of citizens. The way to achieve this is through cooperation with different actors or "stakeholders": individuals or organizations who can contribute to, or benefit from, research, regardless of whether they are researchers themselves or not. Examples include funding agencies, citizens associations, patients, and policy makers (https://aquas.gencat.cat/web/.content/minisite/aquas/publicacions/2018/how_measure_engagement_research_saris1_aquas2018.pdf). Such cooperation is even more relevant in the current, challenging times—even apart from a global pandemic—when pseudo-science, fake news, nihilist attitudes, and ideologies too often threaten the social and technological progress enabled by science. Stakeholder engagement in research can inform and empower citizens, help render research more socially acceptable, and enable policies grounded on evidence-based knowledge. Beyond that, stakeholder engagement is also beneficial to researchers and to research itself. In a recent survey, the majority of scientists reported benefits from public engagement (Burns et al, 2021). These can include increased mutual trust and mutual learning, improved social relevance of research, and improved adoption of results and knowledge (Cottrell et al, 2014). Finally, stakeholder engagement is often regarded as an important factor in sustaining public investment in the life sciences (Burns et al, 2021).
Here, we discuss different levels of stakeholder engagement by way of example, presenting various activities organized by European research institutions. Based on these experiences, we propose ten reflection points that we believe should be considered by the institutions, the scientists, and the funding agencies to achieve meaningful and impactful stakeholder engagement.

6.
Lazy hazy days     
Scientists have warned about the looming climate crisis for decades, but the world has been slow to act. Are we in danger of making a similar mistake by neglecting the dangers of other climatic catastrophes? Subject Categories: Biotechnology & Synthetic Biology, Economics, Law & Politics, Evolution & Ecology

On one of my trips to Antarctica, I was enjoined to refer not to "global warming" or even to "climate change." The former implies a uniform and rather benign process, while the latter suggests just a transition from one state to another and seems to minimize all the attendant risks to survival. Neither of these terms adequately or accurately describes what is happening to our planet's climate system as a result of greenhouse gas emissions, not to mention the effects of urbanization, intensive agriculture, deforestation, and other consequences of human population growth. Instead, I was encouraged to use the term "climate disruption," which embraces the multiplicity of events taking place, some of them still hard to model, that are altering the planetary ecosystem in dramatic ways. With climate disruption now an urgent and undeniable reality, policymakers are finally waking up to the threats that scientists have been warning about for decades. They have accepted the need for action (UNFCCC Conference of the Parties, 2021), even if the commitment remains patchy or lukewarm. But to implement all the necessary changes is a massive undertaking, and it is debatable whether we have enough time left. The fault lies mostly with those who resisted change for so long, hoping the problem would just go away, or denying that it was happening at all. The crisis we face today exists because the changes needed simply cannot be executed overnight. It will take time for the infrastructure to be put in place, whether for renewable electricity, for the switch to carbon-neutral fuels, for sustainable agriculture and construction, or for net carbon capture. If the problems worsen, requiring even more drastic action, at least we do have a direction of travel, though we would be starting off from an even more precarious situation. However, given the time that it has taken—and will still take—to turn around the juggernaut of our industrial society, are we in danger of making the same mistakes all over again, by ignoring the risks of the very opposite process happening in our lifetime? The causes of historic climate cooling are still debated, and though we have fairly convincing evidence regarding specific, sudden events, there is no firm consensus on what is behind longer-term and possibly cyclical changes in the climate. The two best-documented examples are the catastrophe of 536–540 AD and the effects of the Laki Haze of 1783–1784. The cause of the 536–540 event is still debated, but it is widely believed to have been one or more massive volcanic eruptions that created a global atmospheric dust-cloud, resulting in a temperature drop of up to 2°C with concomitant famines and societal crises (Toohey et al, 2016; Helama et al, 2018). The Laki Haze was caused by the massive outpouring of sulfurous fumes from the Laki eruption in Iceland. Its effects on the climate, though just as immediate, were less straightforward. The emissions, combined with other meteorological anomalies, produced a disruption of the jetstream, as well as other localized effects. In northwest Europe, the first half of the summer of 1783 was exceptionally hot, but the following winters were dramatically cold, and the mean temperature across much of the northern hemisphere is estimated to have dropped by around 1.3°C for 2–3 years (Thordarson & Self, 2003).
In Iceland itself, as well as much of western and northern Europe, the effects were even more devastating, with widespread crop failures and deaths of both livestock and humans exacerbated by the toxicity of the volcanic gases (Schmidt et al, 2011). Other volcanic events in recorded time have produced major climatic disturbances, such as the 1815 Tambora eruption in Indonesia, which resulted in 1816 becoming "the year without a summer," marked by temperature anomalies of up to 4°C (Fasullo et al, 2017), again precipitating worldwide famine. The 1883 Krakatoa eruption produced similar disruption, albeit of a lesser magnitude, though the effects are proposed to have been much longer lasting (Gleckler et al, 2006). Much more scientifically challenging is the so-called Little Ice Age in the Middle Ages, approximately from 1250 to 1700 AD, when global temperatures were significantly lower than in the preceding and following centuries. It was marked by particularly frigid and prolonged winters in the northern hemisphere. There is no strong consensus as to its cause(s) or even its exact dates, nor even whether it can be considered a global-scale event rather than a summation of several localized phenomena. A volcanic eruption in 1257 with effects similar to those of the Tambora eruption has been suggested as an initiating event. Disruption of the oceanic circulation system resulting from prolonged anomalies in solar activity is another possible explanation (Lapointe & Bradley, 2021). Nevertheless, and despite an average global cooling of < 1°C, the effects on global agriculture, settlement, migration and trade, on pandemics such as the Black Death, and perhaps even on wars and revolutions were profound. Once or twice in the past century, we have faced devastating wars, tsunamis and pandemics that seemed to come out of the blue and exacted massive tolls on humanity. From the most recent of each of these, there is a growing realization that, although these events are rare and poorly predictable, we can greatly limit the damage if we prepare properly. By devoting a small proportion of our resources over time, we can build the infrastructure and the mechanisms to cope when these disasters do eventually strike. Without abandoning any of the emergency measures to combat anthropogenic warming, I believe that the risk of climate cooling needs to be addressed in the same way. The infrastructure for burning fossil fuels needs to be mothballed, not destroyed. Carbon capture needs to be implemented in a way that is rapidly reversible, if this should ever be needed. Alternative transportation routes need to be planned and built in case existing ones become impassable due to ice or flooding. Properly insulated buildings are not just a way of saving energy. They are essential for survival in extreme cold, as those of us who live in the Arctic countries are well aware—but many other regions also experience severe winters, for which we should all prepare. Biotechnology needs to be set to work to devise ways of mitigating the effects of sudden climatic events such as the Laki Haze or the Tambora and Krakatoa eruptions, as well as longer-term phenomena like the Little Ice Age. Could bacteria be used, for example, to detoxify and dissipate a sulfuric aerosol such as the one generated by the Laki eruption? Methane is generally regarded as a major contributor to the greenhouse effect, but it is short-lived in the atmosphere.
So, could methanogens somehow be harnessed to bring about a temporary rise in global temperatures to offset the short-term cooling effects of a volcanic dust-cloud? We already have a global seed bank in Svalbard (Asdal & Guarino, 2018): it might easily be expanded to include a greater representation of cold-resistant varieties of the world's crop plants that might one day be vital to human survival. And the experience of the Laki Haze indicates a need for varieties capable of withstanding acid rain and other volcanic pollutants, as well as drought and water saturation. An equivalent (embryo) bank for strains of agriculturally important animals potentially threatened by the effects of abrupt cooling of the climate or catastrophic toxification of the atmosphere is also worth considering. It has generally been thought impractical and pointless to prepare for even rarer events, such as cometary impacts, but events that have occurred repeatedly in recorded history and over an even longer time scale (Helama et al, 2021) are likely to happen again. We should and can be better prepared. This is not to say that we should pay attention to every conspiracy theorist or crank, or to paid advocates for energy corporations that seek short-term profits at the expense of long-term survival, but the dangers of climate disruption of all kinds are too great to ignore. Instead of our current rather one-dimensional thinking, we need an "all-risks" approach to the subject: learning from the past and the present to prepare for the future.

7.
Academic Core Facilities are optimally situated to improve the quality of preclinical research by implementing quality control measures and offering these to their users. Subject Categories: Methods & Resources, Science Policy & Publishing

During the past decade, the scientific community and outside observers have noted a concerning lack of rigor and transparency in preclinical research that has led to talk of a "reproducibility crisis" in the life sciences (Baker, 2016; Bespalov & Steckler, 2018; Heddleston et al, 2021). Various measures have been proposed to address the problem: from better training of scientists, to more oversight, to expanded publishing practices such as preregistration of studies. The recently published EQIPD (Enhancing Quality in Preclinical Data) System is, to date, the largest initiative that aims to establish a systematic approach for increasing the robustness and reliability of biomedical research (Bespalov et al, 2021). However, promoting a cultural change in research practices warrants a broad adoption of the Quality System and its underlying philosophy. It is here that academic Core Facilities (CFs), research service providers at universities and research institutions, can make a difference. It is fair to assume that a significant fraction of published data originated from experiments that were designed, run, or analyzed in CFs. These academic services play an important role in the research ecosystem by offering access to cutting-edge equipment and by developing and testing novel techniques and methods that impact research in the academic and private sectors alike (Bikovski et al, 2020). Equipment and infrastructure are not their only value: CFs employ competent personnel with profound knowledge and practical experience in the specific field of interest, be it animal behavior, imaging, crystallography, or genomics. Thus, CFs are optimally positioned to address concerns about the quality and robustness of preclinical research.

8.
Research needs a balance of risk-taking in "breakthrough projects" and gradual progress. To build a sustainable knowledge base, support for both is indispensable. Subject Categories: Careers, Economics, Law & Politics, Science Policy & Publishing

Science is about venturing into the unknown to find unexpected insights and establish new knowledge. Increasingly, academic institutions and funding agencies such as the European Research Council (ERC) explicitly encourage and support scientists in pursuing risky and, hopefully, ground-breaking research. Such incentives are important and have been greatly appreciated by the scientific community. However, the success of the ERC has had its downsides, as other actors in the funding ecosystem have adopted the ERC's focus on "breakthrough science" and the corresponding notions of scientific excellence. We argue that these tendencies are concerning, since disruptive breakthrough innovation is not the only form of innovation in research. While continuous, gradual innovation is often taken for granted, it could become endangered in a research and funding ecosystem that places ever higher value on breakthrough science. This is problematic since, paradoxically, breakthrough potential in science builds on gradual innovation. If the value of gradual innovation is not better recognized, the potential for breakthrough innovation may well be stifled.
Concerns that the hypercompetitive dynamics of the current scientific system may impede rather than spur innovative research have been voiced for many years (Alberts et al, 2014). As performance indicators continue to play a central role for promotions and grants, researchers are under pressure to publish extensively, quickly, and preferably in high-ranking journals (Burrows, 2012). These dynamics increase the risk of mental health issues among scientists (Jaremka et al, 2020), disincentivise relevant and important work (Benedictus et al, 2016), decrease the quality of scientific papers (Sarewitz, 2016), and induce conservative and short-term thinking rather than the risk-taking and original thinking required for scientific innovation (Alberts et al, 2014; Fochler et al, 2016). Against this background, strong incentives for fostering innovative and daring research are indispensable.

9.
Segregation of the largely non-homologous X and Y sex chromosomes during male meiosis is not a trivial task, because their pairing, synapsis, and crossover formation are restricted to a tiny region of homology, the pseudoautosomal region. In humans, meiotic X-Y missegregation can lead to 47,XXY offspring, also known as Klinefelter syndrome, but to what extent genetic factors predispose to paternal sex chromosome aneuploidy has remained elusive. In this issue, Liu et al (2021) provide evidence that deleterious mutations in the USP26 gene constitute one such factor. Subject Categories: Cell Cycle, Development & Differentiation, Molecular Biology of Disease

Analyses of Klinefelter syndrome patients and Usp26‐deficient mice have revealed a genetic influence on age‐dependent sex chromosome missegregation during male meiosis.

Multilayered mechanisms have evolved to ensure successful X-Y recombination, as a prerequisite for subsequent normal chromosome segregation. These include a distinct chromatin structure as well as specialized proteins on the pseudoautosomal region (Kauppi et al, 2011; Acquaviva et al, 2020). Even so, X-Y recombination fails fairly often, especially in the face of even modest meiotic perturbations. It is perhaps not surprising then that X-Y aneuploidy—but not autosomal aneuploidy—in sperm increases with age (Lowe et al, 2001; Arnedo et al, 2006), as does the risk of fathering sons with Klinefelter syndrome (De Souza & Morris, 2010). Klinefelter syndrome is one of the most common aneuploidies in liveborn individuals (Thomas & Hassold, 2003). While most human trisomies result from errors in maternal chromosome segregation, this is not the case for Klinefelter syndrome, where the extra X chromosome is equally likely to be of maternal or paternal origin (Thomas & Hassold, 2003; Arnedo et al, 2006). Little is known about genetic factors in humans that predispose to paternal XY aneuploidy, i.e., that increase the risk of fathering Klinefelter syndrome offspring. The general notion has been that paternally derived Klinefelter syndrome arises stochastically. However, fathers of Klinefelter syndrome patients have elevated rates of XY aneuploid sperm (Lowe et al, 2001; Arnedo et al, 2006), implying a persistent defect in spermatogenesis in these individuals rather than a one-off meiotic error. To identify possible genetic factors contributing to Klinefelter syndrome risk, Liu et al (2021) performed whole-exome sequencing in a discovery cohort of > 100 Klinefelter syndrome patients, followed by targeted sequencing in a much larger cohort of patients and controls, as well as Klinefelter syndrome family trios. The authors homed in on a mutational cluster ("mutated haplotype") in ubiquitin-specific protease 26 (USP26), a testis-expressed gene located on the X chromosome. Effects of this gene's loss of function (Usp26-deficient mice) on spermatogenesis have recently been independently reported by several laboratories and ranged from no detectable fertility phenotype (Felipe-Medina et al, 2019) to subfertility/sterility associated with both meiotic and spermiogenic defects (Sakai et al, 2019; Tian et al, 2019). With their Klinefelter syndrome cohort findings in hand, Liu et al (2021) also turned to Usp26 null mice, paying particular attention to X-Y chromosome behavior and—unlike earlier mouse studies—including older mice in their analyses. They found that Usp26-deficient animals often failed to achieve stable pairing and synapsis of X-Y chromosomes in spermatocytes, produced XY aneuploid sperm at an abnormally high frequency, and sometimes also sired XXY offspring. Importantly, these phenotypes only occurred at an advanced age: XY aneuploidy was seen in six-month-old, but not two-month-old, Usp26-deficient males. Moreover, levels of spindle assembly checkpoint (SAC) proteins were also reduced in six-month-old males. Thus, in older Usp26 null mice, the combination of less efficient X-Y pairing and less stringent SAC-mediated surveillance of faithful chromosome segregation allows for sperm aneuploidy, providing another example of SAC leakiness in males (see Lane & Kauppi, 2019 for discussion). Liu et al's analyses shed some light on what molecular mechanisms may be responsible for the reduced efficiency of X-Y pairing and synapsis in Usp26-deficient spermatocytes.
USP26 codes for a deubiquitinating enzyme that has several substrates in the testis. Because USP26 prevents degradation of these substrates, their levels should be downregulated in Usp26 null testes. Liu et al (2021) show that USP26 interacts with TEX11, a protein required for stable pairing and normal segregation of the X and Y chromosomes in mouse meiosis (Adelman & Petrini, 2008). USP26 can de-ubiquitinate TEX11 in vitro, and in Usp26 null testes, TEX11 was almost undetectable. It is worth noting that USP26 has several other known substrates, including the androgen receptor (AR), and therefore, USP26 disruption likely contributes to compromised spermatogenesis via multiple mechanisms. For example, AR signaling-dependent hormone levels are misregulated in Usp26 null mice (Tian et al, 2019). The sex chromosome phenotypes observed in Usp26 null mice predict that men with USP26 mutations may be fertile but produce XY aneuploid sperm at an abnormally high frequency, and that spermatogenic defects should increase with age (Fig 1). These predictions were testable, because the mutated USP26 haplotype, present in 13% of Klinefelter syndrome patients, was also reasonably common in fertile men (7–10%). Indeed, sperm XY aneuploidy was substantially higher in fertile men with the mutated USP26 haplotype than in those without USP26 mutations. Some mutation carriers produced > 4% aneuploid sperm. Moreover, age-dependent oligospermia was also found to be associated with the mutated USP26 haplotype.

Figure 1: Mutated USP26 as a genetic risk factor for age-dependent X-Y defects in spermatogenesis. Mouse genetics demonstrate that deleterious USP26 mutations lead to less-efficient X-Y pairing and recombination with advancing age. A concomitant decrease of spindle assembly checkpoint (SAC) protein levels leads to less-efficient elimination of metaphase I spermatocytes that contain misaligned X and Y chromosomes. This allows for the formation of XY aneuploid sperm in older individuals and, subsequently, an increased age-dependent risk of fathering Klinefelter syndrome (KS) offspring, two correlates also observed in human USP26 mutation carriers. At the same time, oligospermia/subfertility also increases with advanced age in both Usp26-deficient mice and USP26 mutation-carrying men, tempering Klinefelter syndrome offspring risk but also decreasing fecundity.

As indicated by its prevalence in the normal control population, the mutated USP26 haplotype is not selected against in the human population. With > 95% of sperm in USP26 mutation carriers having a normal haploid chromosomal composition, the risk of producing (infertile) Klinefelter syndrome offspring remains modest, likely explaining why USP26 mutant alleles are not eliminated. Given that full Usp26 disruption barely affects fertility of male mice during their prime reproductive age (Felipe-Medina et al, 2019; Tian et al, 2019; Liu et al, 2021), there is little reason to assume strong negative selection against USP26 variants in humans. USP26, uncovered by Liu et al as the first-ever genetic risk factor predisposing to sperm X-Y aneuploidy and paternal-origin Klinefelter syndrome offspring in humans, may be just one of many: 90% of Liu et al's Klinefelter syndrome cases were not associated with USP26 mutations. But even in the age of genomics, discovery of Klinefelter syndrome risk factors is not straightforward, since most sperm of risk mutation carriers will not be XY aneuploid and thus not give rise to Klinefelter syndrome offspring.
In addition, as Usp26 null mice demonstrate, both genetic and non-genetic modifiers impact the penetrance of the XY aneuploidy phenotype: spermatogenesis in the absence of Usp26 was impaired in the DBA/2 but not the C57BL/6 mouse strain background (Sakai et al, 2019), and in older mice, there was substantial inter-individual variation in the severity of the X-Y defect (Liu et al, 2021). In human cohorts, genetic and non-genetic modifiers are expected to blur the picture even more. Future identification of sex chromosome aneuploidy risk factors has human health implications beyond Klinefelter syndrome. Firstly, XXY incidence is not only relevant for Klinefelter syndrome livebirths—it also contributes to stillbirths and spontaneous abortions, at a 4-fold higher rate than to livebirths (Thomas & Hassold, 2003). Secondly, persistent meiotic X-Y defects can, over time, result in oligospermia and even infertility. Since the mean age of first-time fathers is steadily rising and is currently well over 30 years in many Western countries, age-dependent spermatogenic defects will be of ever-increasing clinical relevance.

10.
The response by the author. Subject Categories: S&S: Economics & Business, S&S: Ethics

I thank Michael Bronstein and Sophia Vinogradov for their interest and comments. I would like to respond to a few of their points. First, I agree with the authors that empirical studies should be conducted to validate any approaches to prevent the spread of misinformation before their implementation. Nonetheless, I think that the ideas I have proposed may be worth further discussion and may inspire empirical studies to test their effectiveness. Second, the authors warn that informing about the imperfections of scientific research may undermine trust in science and scientists, which could result in higher vulnerability to online health misinformation (Roozenbeek et al, 2020; Bronstein & Vinogradov, 2021). I believe that transparency about limitations and problems in research does not necessarily have to diminish trust in science and scientists. On the contrary, as Veit et al put it, "such honesty… is a prerequisite for maintaining a trusting relationship between medical institutions (and practitioners) and the public" (Veit et al, 2021). Importantly, to give an honest picture of scientific research, information about its limitations should be put in adequate context. In particular, the public should also be aware that "good science" is being done by many researchers; that we do have solid evidence of the effectiveness of many medical interventions; and that efforts are being made to address the problems related to the quality of research. Third, Bronstein and Vinogradov suggest that false and dangerous information should be censored. I agree with the authors that "[c]ensorship can prevent individuals from being exposed to false and potentially dangerous ideas" (Bronstein & Vinogradov, 2021). I also recognize that some information is false beyond any doubt and its spread may be harmful. What concerns me are, among other things, the challenges of defining what counts as dangerous and false information and of limiting censorship to only this kind of information. For example, on what sources should decisions to censor be based, and who should make such decisions? Anyone, whether an individual or an organization, with a responsibility to censor information will likely be prone not only to mistakes but also to abuses of power that further their own interests. Do the benefits we want to achieve by censorship outweigh the potential risks? Fourth, we need rigorous empirical studies examining the actual impact of medical misinformation. What exactly are the harms we are trying to protect against, and what is their scale? This information is necessary to choose proportionate and effective measures to reduce the harms. Bronstein and Vinogradov give an example of a harm which may be caused by misinformation—an increase in methanol poisoning in Iran. Yet, as noted by the authors, misinformation is not the sole factor in this case; there are also cultural and other contexts (Arasteh et al, 2020; Bronstein & Vinogradov, 2021). Importantly, the methods of studies exploring the effects of misinformation should be carefully elaborated, especially when study participants are asked to self-report. A recent study suggests that some claims about the prevalence of dangerous behaviors that may have been caused by misinformation, such as drinking bleach, are largely exaggerated due to the presence of problematic respondents in surveys (preprint: Litman et al, 2021). Last but not least, I would like to call attention to the importance of how the veracity of information is determined in empirical studies on misinformation.
For example, in a study by Roozenbeek et al, cited by Bronstein and Vinogradov, the World Health Organization (WHO) was used as a reliable source of information, which raises questions. For instance, Roozenbeek et al (2020) used the statement "the coronavirus was bioengineered in a military lab in Wuhan" as an example of false information, relying on the judgment of the WHO found on its "mythbusters" website (Roozenbeek et al, 2020). Yet, is there solid evidence to claim that this statement is false? At present, at least some scientists declare that we cannot rule out that the virus was genetically manipulated in a laboratory (Relman, 2020; Segreto & Deigin, 2020). Interestingly, the WHO also no longer excludes such a possibility and has launched an investigation into this issue (https://www.who.int/health‐topics/coronavirus/origins‐of‐the‐virus, https://www.who.int/emergencies/diseases/novel‐coronavirus‐2019/media‐resources/science‐in‐5/episode‐21‐‐‐covid‐19‐‐‐origins‐of‐the‐sars‐cov‐2‐virus); the claim that the laboratory origin of the virus is a myth is no longer present on the WHO "mythbusters" website (https://www.who.int/emergencies/diseases/novel‐coronavirus‐2019/advice‐for‐public/myth‐busters). Against this backdrop, some results of the study by Roozenbeek et al (2020) seem misleading. In particular, the perception by study participants of the reliability of the statement about the bioengineered virus in Roozenbeek et al (2020) does not reflect susceptibility to misinformation, as intended by the researchers, but rather how respondents perceive the reliability of uncertain information. I hope that discussion and research on these and related issues will continue.

11.
Debates about the source of antibodies and their use are confusing two different issues. A ban on live immunization would have no repercussions for the quality of antibodies. Subject Categories: S&S: Economics & Business, Methods & Resources, Chemical Biology

There is an ongoing debate on how antibodies are being generated, produced and used (Gray, 2020; Marx, 2020). Or rather, there are two debates, which are not necessarily related to each other. The first one concerns the quality of antibodies used in scientific research and the repercussions for the validity of results (Bradbury & Pluckthun, 2015). The second debate is about the use of animals to generate and produce antibodies. Although these are two different issues, we observe that the debates have become entangled, with arguments for one topic incorrectly being used to motivate the other and vice versa. This is not helpful, and we should disentangle the knot. Polyclonal antibodies are being criticized because they suffer from cross-reactivity, high background and batch-to-batch variation (Bradbury & Pluckthun, 2015). Monoclonal antibodies produced from hybridomas are criticized because they often lack specificity owing to genetic heterogeneity introduced during hybridoma generation that impairs the quality of the monoclonals (Bradbury et al, 2018). These are valid criticisms, and producing antibodies in a recombinant manner will, indeed, help to improve quality and specificity. But a mediocre antibody will remain a mediocre antibody, no matter how it is produced. Recombinant methods will just produce a mediocre antibody more consistently. Getting a good antibody is not easy, and much depends on the nature and complexity of the antigen. And low-quality antibodies are often the result of poor screening, poor quality control, incomplete characterization and the lack of international standards. Nevertheless, the technologies to ensure good selection and to guarantee consistent quality are much more advanced than a decade ago, and scientists and antibody producers should implement these to deliver high-quality antibodies. Whether antibodies are generated by animal immunization or from naïve or synthetic antibody libraries is less relevant; they can all be produced recombinantly, and screening and characterization are needed in all cases to determine quality and whether the antibody is fit for purpose. But criticisms of the quality of many antibodies and pleas for switching to recombinant production of antibodies cannot be mixed up with a call to ban animal immunization. The EU Reference Laboratory for Alternatives to Animal Testing (EURL ECVAM) recently published a recommendation to stop using animals for generating and producing antibodies for scientific, diagnostic and even therapeutic applications (EURL ECVAM, 2020). This recommendation is mainly supported by scientists who seem to be biased towards synthetic antibody technology for various reasons. Their main argument is that antibodies derived from naïve or synthetic libraries are a valid (and exclusive) alternative. But are they? One can certainly select antibodies from non-immune libraries, and, depending on the antigen and the type of application, these antibodies can be fit for purpose. In fact, a few such antibodies have made it to the market as therapeutics, Adalimumab (Humira®) being a well-known example. But up to now, the vast majority of antibodies continues to come from animal immunization (Lu et al, 2020). And there is a good reason for that. It is generally possible to generate a few positive hits in a naïve/synthetic library, and the more diverse the library, the more hits one is likely to get.
But many decades of experience with immunization of animals—especially when they are outbred—show that they generate larger amounts of antibodies with superior properties. And the more complex your antigen is, the more the balance swings towards animal immunization if you want a guarantee of success. There are different factors at work here. First, the immune system of mammals has evolved over millions of years to efficiently produce excellent antibodies against a very diverse range of antigens. Second, presenting the antigen multiple times in its desired (native) conformation to the animal immune system exploits the natural maturation process to fine-tune the immune response against particular qualities. Another factor is that in vivo maturation seems to select against negative properties such as self-recognition and aggregation. It also helps to select for important properties that go beyond mere molecular recognition (Jain et al, 2017). In industrial parlance, antibodies from animal immunization are more "developable" and have favourable biophysical properties (Lonberg, 2005). Indeed, the failure rate for antibodies selected from naïve or synthetic libraries is significantly higher. Of course, the properties of synthetic antibodies selected from non-immune libraries can be further matured in vitro, for example by light-chain shuffling or targeted mutagenesis of the complementarity-determining regions (CDRs). While this method has become more sophisticated over the years, it remains a very complex and iterative process with no guarantee of producing a high-quality antibody. Antibodies are an ever more important tool in scientific research and a growing area in human and veterinary therapeutics. Major therapeutic breakthroughs in immunology and oncology in the past decades are based on antibodies (Lu et al, 2020). The vast majority of these therapeutic antibodies were derived from animals. An identical picture appears when you look at the antibodies in fast-track development to combat the current COVID-19 crisis: again, the vast majority are either derived from patients or from animal immunizations. The same holds true for antibodies that are used in diagnostics and epidemiologic studies for COVID-19. It is for that reason that we need the tools and methods that guarantee antibodies of the highest quality and provide the best chance for success. The COVID-19 pandemic is only one illustration of this need. If we block access to these tools, both scientific research and society at large will be negatively impacted. We therefore should not limit ourselves to naïve and synthetic libraries. Animal immunization remains an indispensable method that needs to stay. But we all agree that these immunizations must be performed under best practice to further reduce the harm to animals.

12.
13.
A survey of academics in Germany shows a lack of and a great demand for training in leadership skills. Subject Categories: Careers, Science Policy & Publishing

Success and productivity in science are measured largely by the number of publications in scientific journals and the acquisition of third-party funding to finance further research (Detsky, 2011). Consequently, as young researchers advance in their careers, they become highly trained in directly related skills, such as scientific writing, so as to increase their chances of securing publications and grants. Acquiring leadership skills, however, is often neglected, as these do not contribute to the evaluation of scientific success (Detsky, 2011). Therefore, an early-career researcher may become the leader of a research group based on publication record and solicitation of third-party funding, but without any training in leadership or team management skills (Lashuel, 2020). Leadership in the context of academic research requires a unique set of competencies, knowledge and skills in addition to "traditional" leadership skills (Anthony & Antony, 2017), such as managing change, adaptability, empathy, motivating individuals, and setting direction and vision, among others. Academic leadership also requires promoting the research group's reputation, networking, protecting staff autonomy, promoting academic credibility, and managing complexity (Anthony & Antony, 2017).

14.
Ethical challenges should be addressed before gene editing is made available to improve the immune response against emerging viruses. Subject Categories: S&S: Economics & Business, Genetics, Gene Therapy & Genetic Disease, Immunology

In 1881, Louis Pasteur proved the "germ theory of disease", namely that microorganisms are responsible for causing a range of diseases. Following Pasteur's and Robert Koch's groundbreaking work on pathogens, further research during the 20th century elucidated, at the molecular level, how the immune system fends off disease-causing microorganisms. The COVID-19 pandemic has again focused scientific and public attention on immunology, not least owing to the race to deploy vaccines to halt the spread of the virus. Although most countries have now started vaccination programs to immunize a large part of the world's population, the process will take time, vaccines may not be available to everyone, and a number of unresolved issues remain, including the potential contagiousness of vaccinated individuals and the duration of protection (Polack et al, 2020). It would therefore be extremely helpful from a public health perspective—and indeed lifesaving for those with an elevated risk of developing a severe course of the disease—if we could boost the human immune system by other means to better fight off SARS-CoV-2 and possibly other viruses. Recent studies showing that some individuals may be less susceptible to contracting severe COVID-19 depending on their genetic status support such visions (COVID-19 Host Genetics Initiative, 2020). This could eventually inspire research projects on gene therapy with the aim of generally enhancing immunity against viral infections.
The idea of genetically enhancing the human immune response is not new and spread from academic circles to policymakers and the general public even before the pandemic, when He Jiankui announced in November 2018 the birth of genetically edited twins who, he claimed, were resistant to HIV. The public outcry was massive, not only because He violated standards of methodological rigor and research ethics, but also because of fundamental doubts about the wisdom and legitimacy of human germline manipulation (Schleidgen et al, 2020). Somatic gene therapy has been met with a less categorical rejection, but it has also been confronted with skepticism when major setbacks or untoward events occurred, such as the death of Jesse Gelsinger during an early clinical trial for gene therapy in 1999. Nonetheless, given the drastic impact the current pandemic has on so many lives, there may be a motivation to put concerns aside. In fact, even if we managed to get rid of COVID-19 owing to vaccines—or at least to keep its infectiousness and mortality low—another virus will appear sooner or later; an improved resistance to viral pathogens—including coronaviruses—would be an important asset. Interventions to boost the immune system could in fact make use of either germline gene editing, as was the case with the Chinese twins, or somatic gene editing. The former requires time, and only the next generation would potentially benefit, while the latter could be applied immediately and theoretically be used to deal with the ongoing COVID-19 pandemic.

15.
The question of whether non-human animals are conscious is of fundamental importance. There are already good reasons to think that many are, based on evolutionary continuity and other considerations. However, the hypothesis is notoriously resistant to direct empirical test. Numerous studies have shown behaviour in animals analogous to consciously-produced human behaviour. Fewer probe whether the same mechanisms are in use. One promising line of evidence about consciousness in other animals derives from experiments on metamemory. A study by Hampton (Proc Natl Acad Sci USA 98(9):5359–5362, 2001) suggests that at least one rhesus macaque can use metamemory to predict whether it would itself succeed on a delayed matching-to-sample task. Since it is not plausible that mere meta-representation requires consciousness, Hampton's study invites an important question: what kind of metamemory is good evidence for consciousness? This paper argues that if it were found that an animal had a memory trace which allowed it to use information about a past perceptual stimulus to inform a range of different behaviours, that would indeed be good evidence that the animal was conscious. That functional characterisation can be tested by investigating whether successful performance on one metamemory task transfers to a range of new tasks. The paper goes on to argue that thinking about animal consciousness in this way helps in formulating a more precise functional characterisation of the mechanisms of conscious awareness.

16.

Correction to: The EMBO Journal (2021) 40: e107786. DOI 10.15252/embj.2021107786 | Published online 8 June 2021. The authors would like to add three references to the paper: Starr et al and Zahradník et al also reported that the Q498H or Q498R mutation has enhanced binding affinity to ACE2; and Liu et al reported on the binding of bat coronavirus to ACE2. Starr et al and Zahradník et al have now been cited in the Discussion section, and the following sentence has been corrected from: "According to our data, the SARS-CoV-2 RBD with Q498H increases the binding strength to hACE2 by 5-fold, suggesting the Q498H mutant is more ready to interact with human receptor than the wildtype and highlighting the necessity for more strict control of virus and virus-infected animals" to: "Here, according to our data and two recently published papers, the SARS-CoV-2 RBD with Q498H or Q498R increases the binding strength to hACE2 (Starr et al, 2020; Zahradník et al, 2021), suggesting the mutant with Q498H or Q498R is more ready to interact with human receptor than the wild type and highlighting the necessity for more strict control of virus and virus-infected animals". The Liu et al citation has been added to the following sentence: "In another paper published by our group recently, RaTG13 RBD was found to bind to hACE2 with much lower binding affinity than SARS-CoV-2 though RaTG13 displays the highest whole-genome sequence identity (96.2%) with the SARS-CoV-2 (Liu et al, 2021)". Additionally, the authors have added the GISAID accession IDs to the sequence names of the SARS-CoV-2 in two human samples (Discussion section). To make identification unambiguous, the sequence names have been updated from "SA-lsf-27 and SA-lsf-37" to "GISAID accession ID: EPI_ISL_672581 and EPI_ISL_672589". Lastly, the authors declare in the Materials and Methods section that all experiments employed SARS-CoV-2 pseudovirus in cultured cells. These experiments were performed in a BSL-2-level laboratory and approved by the Science and Technology Conditions Platform Office, Institute of Microbiology, Chinese Academy of Sciences. These changes are herewith incorporated into the paper.

17.
18.
19.
Commercial screening services for inheritable diseases raise concerns about pressure on parents to terminate “imperfect babies”. Subject Categories: S&S: Economics & Business, Molecular Biology of Disease

Nearly two decades have passed since the first draft sequences of the human genome were published at the eye-watering cost of nearly US$3 billion for the publicly funded project. Sequencing costs have dropped drastically since, and a range of direct-to-consumer genetics companies now offer partial sequencing of your individual genome in the US$100 price range, and whole-genome sequencing for less than US$1,000. While such tests are mainly for personal use, there have also been substantial drops in the price of clinical genome sequencing, which has greatly enabled the study of and screening for inheritable disorders. This has both advanced our understanding of these diseases in general and benefitted early diagnosis of many genetic disorders, which is crucial for early and efficient treatment. Such detection can, in fact, now occur long before birth: from cell-free DNA testing during the first trimester of pregnancy, to genetic testing of embryos generated by in vitro fertilization, to preconception carrier screening of parents to find out if both are carriers of an autosomal recessive condition. While such prenatal testing of foetuses or embryos primarily focuses on diseases caused by chromosomal abnormalities, technological advances also allow for the testing of an increasing number of heritable monogenic conditions in cases where the disease-causing variants are known. The medical benefits of such screening are obvious: I personally have lost two pregnancies, one to Turner's syndrome and the other to an extremely rare and lethal autosomal recessive skeletal dysplasia, and I know first-hand the heartbreak and devastation involved in finding out that you will lose the child you already love so much. It should be noted, though, that in rare cases Turner syndrome is survivable, and the long-term outlook in those cases is typically good (GARD, 2021). In addition, I have Kallmann syndrome, a highly genetically complex dominant endocrine disorder (Maoine et al, 2018), and early detection and treatment make a difference in outcome. Being able to screen early during pregnancy or childhood therefore has significant benefits for affected children. Many other genetic disorders similarly benefit from prenatal screening and detection. But there is also obvious cause for concern: the concept of "designer babies" selected for sex, physical features, or other apparent benefits is well entrenched in our society – and indeed culture – as a product of a dystopian future. Just as a recent example, Philip Ball, writing for the Guardian in 2017, described designer babies as "an ethical horror waiting to happen" (Ball, 2017). In addition, various commercial enterprises hope to capitalize on these screening technologies. Orchid Inc claims that their preconception screening allows you to "… safely and naturally, protect your baby from diseases that run in your family". The fact that this is hugely problematic if not impossible from a technological perspective has already been extensively clarified by Lior Pachter, a computational biologist at Caltech (Pachter, 2021). George Church at Harvard University suggested creating a DNA-based dating app that would effectively prevent people who are both carriers of certain genetic conditions from matching (Flynn, 2019).
Richard Dawkins at Oxford University recently commented that "…the decision to deliberately give birth to a Down [syndrome] baby, when you have the choice to abort it early in the pregnancy, might actually be immoral from the point of view of the child's own welfare" (Dawkins, 2021). These are just a few examples, and as screening technology becomes cheaper, more companies will jump on the bandwagon of perfect "healthy" babies. Conversely, this creates a risk that parents come under pressure to terminate pregnancies with "imperfect babies", as I have experienced myself. What does this mean for people with rare diseases? From my personal moral perspective, the ethics are clear in cases where the pregnancy is clearly not viable. Yet, there are literally thousands of monogenic conditions and even chromosomal abnormalities, not all of which are lethal, and we are making constant strides in treating conditions that were previously considered untreatable. In addition, there is still societal prejudice against people with genetic disorders, and ignorance about what it is like to live with a rare disease. In reality, however, all rare disease patients I have encountered are happy to be alive and here, even those whose conditions have a significant impact on their quality of life. Many of us also don't like the terms "disorder" or "syndrome", as we are so much more than merely a disorder or a syndrome. Unfortunately, I also see many parents panic about the results of prenatal testing. Without adequate genetic counselling, they do not understand that their baby's condition may actually have quite a good prognosis without major impact on quality of life. Following from this, a mere diagnosis of a rare disease – many of which would not even necessarily have been detectable until later in life, if at all – can be enough to make parents consider termination, due to social stigma. This of course raises the thorny issue of regulation, which ranges from the USA, where there is little to no regulation of such screening technologies (ACOG, 2020), to Sweden, where such screening is banned with the exception of specific high-risk/lethal medical conditions for which both parents are known carriers (SMER, 2021). As countries come to grips with both the potential and the risks involved in new screening technologies, medical ethics boards have approached this issue. And as screening technologies advance, we will need to ask ourselves difficult questions as a society. I know that in the world of "perfect babies" that some of these companies and individuals are trying to promote, I would not exist, nor would my daughter. I have never before had to explain so often to people that our lives have value, and I do not want to continue having to do so. Like other forms of diversity, genetic diversity is important and makes us richer as a society. As these screening technologies quickly advance and become more widely available, regulation should at least guarantee that screening must involve proper genetic counselling from a trained clinical geneticist, so that parents actually understand the implications of the test results. More urgently, we need to address the problem of societal attitudes towards rare diseases, face the prejudice and fear towards patients, and understand that abolishing genetic diversity in a quest for perfect babies would impoverish humanity and make the world a much poorer place.

20.

In “Structural basis of transport and inhibition of the Plasmodium falciparum transporter PfFNT” by Lyu et al (2021), the authors depict the inhibitor MMV007839 in its hemiketal form in Fig 3A and F, Fig 4C, and Appendix Figs S10A, B and S13. We note that Golldack et al (2017) reported that the linear vinylogous acid tautomer of MMV007839 constitutes the binding and inhibitory entity of PfFNT. The authors are currently obtaining higher resolution cryo‐EM structural data of MMV007839‐bound PfFNT to ascertain which of the interconvertible isoforms is bound and the paper will be updated accordingly.  相似文献   
