Similar articles
1.
A survey of academics in Germany shows a lack of and a great demand for training in leadership skills. Subject Categories: Careers, Science Policy & Publishing

Success and productivity in science are measured largely by the number of publications in scientific journals and the acquisition of third‐party funding to finance further research (Detsky, 2011). Consequently, as young researchers advance in their careers, they become highly trained in directly related skills, such as scientific writing, so as to increase their chances of securing publications and grants. Acquiring leadership skills, however, is often neglected, as these do not contribute to the evaluation of scientific success (Detsky, 2011). An early‐career researcher may therefore become leader of a research group based on publication record and solicitation of third‐party funding, but without any training in leadership or team management skills (Lashuel, 2020). Leadership, in the context of academic research, requires a unique set of competencies, knowledge and skills in addition to “traditional” leadership skills (Anthony & Antony, 2017), such as managing change, adaptability, empathy, motivating individuals, and setting direction and vision, among others. Academic leadership also requires promoting the research group’s reputation, networking, protecting staff autonomy, promoting academic credibility, and managing complexity (Anthony & Antony, 2017).

2.
Research needs a balance of risk‐taking in “breakthrough projects” and gradual progress. For building a sustainable knowledge base, it is indispensable to provide support for both. Subject Categories: Careers, Economics, Law & Politics, Science Policy & Publishing

Science is about venturing into the unknown to find unexpected insights and establish new knowledge. Increasingly, academic institutions and funding agencies such as the European Research Council (ERC) explicitly encourage and support scientists in pursuing risky and hopefully ground‐breaking research. Such incentives are important and have been greatly appreciated by the scientific community. However, the success of the ERC has had its downsides, as other actors in the funding ecosystem have adopted the ERC’s focus on “breakthrough science” and its respective notions of scientific excellence. We argue that these tendencies are concerning, since disruptive breakthrough innovation is not the only form of innovation in research. While continuous, gradual innovation is often taken for granted, it could become endangered in a research and funding ecosystem that places ever higher value on breakthrough science. This is problematic since, paradoxically, breakthrough potential in science builds on gradual innovation. If the value of gradual innovation is not better recognized, the potential for breakthrough innovation may well be stifled.
Concerns that the hypercompetitive dynamics of the current scientific system may impede rather than spur innovative research have been voiced for many years (Alberts et al, 2014). As performance indicators continue to play a central role in promotions and grants, researchers are under pressure to publish extensively, quickly, and preferably in high‐ranking journals (Burrows, 2012). These dynamics increase the risk of mental health issues among scientists (Jaremka et al, 2020), dis‐incentivise relevant and important work (Benedictus et al, 2016), decrease the quality of scientific papers (Sarewitz, 2016) and induce conservative, short‐term thinking rather than the risk‐taking and original thinking required for scientific innovation (Alberts et al, 2014; Fochler et al, 2016). Against this background, strong incentives for fostering innovative and daring research are indispensable.

3.
Lazy hazy days     
Scientists have warned about the looming climate crisis for decades, but the world has been slow to act. Are we in danger of making a similar mistake, by neglecting the dangers of other climatic catastrophes? Subject Categories: Biotechnology & Synthetic Biology, Economics, Law & Politics, Evolution & Ecology

On one of my trips to Antarctica, I was enjoined to refer not to “global warming” or even to “climate change.” The former implies a uniform and rather benign process, while the latter suggests just a transition from one state to another and seems to minimize all the attendant risks to survival. Neither of these terms adequately or accurately describes what is happening to our planet's climate system as a result of greenhouse gas emissions, not to mention the effects of urbanization, intensive agriculture, deforestation, and other consequences of human population growth. Instead, I was encouraged to use the term “climate disruption,” which embraces the multiplicity of events taking place, some of them still hard to model, that are altering the planetary ecosystem in dramatic ways.

With climate disruption now an urgent and undeniable reality, policymakers are finally waking up to the threats that scientists have been warning about for decades. They have accepted the need for action (UNFCCC Conference of the Parties, 2021), even if the commitment remains patchy or lukewarm. But to implement all the necessary changes is a massive undertaking, and it is debatable whether we have enough time left. The fault lies mostly with those who resisted change for so long, hoping the problem would just go away, or denying that it was happening at all. The crisis situation that we face today exists because the changes needed simply cannot be executed overnight. It will take time for the infrastructure to be put in place, whether for renewable electricity, for the switch to carbon‐neutral fuels, for sustainable agriculture and construction, or for net carbon capture.
If the problems worsen, requiring even more drastic action, at least we do have a direction of travel, though we would be starting off from an even more precarious situation.

However, given the time that it has taken—and will still take—to turn around the juggernaut of our industrial society, are we in danger of making the same mistakes all over again, by ignoring the risks of the very opposite process happening in our lifetime? The causes of historic climate cooling are still debated, and though we have fairly convincing evidence regarding specific, sudden events, there is no firm consensus on what is behind longer‐term and possibly cyclical changes in the climate.

The two best‐documented examples are the catastrophe of 536–540 AD and the effects of the Laki Haze of 1783–1784. The cause of the 536–540 event is still debated, but is widely believed to have been one or more massive volcanic eruptions that created a global atmospheric dust‐cloud, resulting in a temperature drop of up to 2°C with concomitant famines and societal crises (Toohey et al, 2016; Helama et al, 2018). The Laki Haze was caused by the massive outpouring of sulfurous fumes from the Laki eruption in Iceland. Its effects on the climate, though just as immediate, were less straightforward. The emissions, combined with other meteorological anomalies, produced a disruption of the jetstream, as well as other localized effects. In northwest Europe, the first half of the summer of 1783 was exceptionally hot, but the following winters were dramatically cold, and the mean temperature across much of the northern hemisphere is estimated to have dropped by around 1.3°C for 2–3 years (Thordarson & Self, 2003).
In Iceland itself, as well as much of western and northern Europe, the effects were even more devastating, with widespread crop failures and deaths of both livestock and humans exacerbated by the toxicity of the volcanic gases (Schmidt et al, 2011).

Other volcanic events in recorded history have produced major climatic disturbances, such as the 1816 Tambora eruption in Indonesia, which resulted in “the year without a summer,” marked by temperature anomalies of up to 4°C (Fasullo et al, 2017), again precipitating worldwide famine. The 1883 Krakatoa eruption produced similar disruption, albeit of a lesser magnitude, though the effects are proposed to have been much longer lasting (Gleckler et al, 2006).

Much more scientifically challenging is the so‐called Little Ice Age, approximately from 1250 to 1700 AD, when global temperatures were significantly lower than in the preceding and following centuries. It was marked by particularly frigid and prolonged winters in the northern hemisphere. There is no strong consensus as to its cause(s) or even its exact dates, nor even that it can be considered a global‐scale event rather than a summation of several localized phenomena. A volcanic eruption in 1257 with similar effects to the one of 1816 has been suggested as an initiating event. Disruption of the oceanic circulation system resulting from prolonged anomalies in solar activity is another possible explanation (Lapointe & Bradley, 2021). Nevertheless, and despite an average global cooling of less than 1°C, the effects on global agriculture, settlement, migration and trade, on pandemics such as the Black Death, and perhaps even on wars and revolutions, were profound.

Once or twice in the past century, we have faced devastating wars, tsunamis and pandemics that seemed to come out of the blue and exacted massive tolls on humanity.
From the most recent of each of these, there is a growing realization that, although these events are rare and poorly predictable, we can greatly limit the damage if we prepare properly. By devoting a small proportion of our resources over time, we can build the infrastructure and the mechanisms to cope when these disasters do eventually strike.

Without abandoning any of the emergency measures to combat anthropogenic warming, I believe that the risk of climate cooling needs to be addressed in the same way. The infrastructure for burning fossil fuels needs to be mothballed, not destroyed. Carbon capture needs to be implemented in a way that is rapidly reversible, should this ever be needed. Alternative transportation routes need to be planned and built in case existing ones become impassable due to ice or flooding. Properly insulated buildings are not just a way of saving energy. They are essential for survival in extreme cold, as those of us who live in the Arctic countries are well aware—but many other regions also experience severe winters, for which we should all prepare.

Biotechnology needs to be set to work to devise ways of mitigating the effects of sudden climatic events such as the Laki Haze or the Tambora and Krakatoa eruptions, as well as longer‐term phenomena like the Little Ice Age. Could bacteria be used, for example, to detoxify and dissipate a sulfuric aerosol such as the one generated by the Laki eruption? Methane is generally regarded as a major contributor to the greenhouse effect, but it is short‐lived in the atmosphere. So, could methanogens somehow be harnessed to bring about a temporary rise in global temperatures to offset the short‐term cooling effects of a volcanic dust‐cloud?

We already have a global seed bank in Svalbard (Asdal & Guarino, 2018): it might easily be expanded to include a greater representation of cold‐resistant varieties of the world's crop plants that might one day be vital to human survival.
And the experience of the Laki Haze indicates a need for varieties capable of withstanding acid rain and other volcanic pollutants, as well as drought and water saturation. An equivalent (embryo) bank for strains of agriculturally important animals potentially threatened by the effects of abrupt cooling of the climate or catastrophic toxification of the atmosphere is also worth considering.

It has generally been thought impractical and pointless to prepare for even rarer events, such as cometary impacts, but events that have occurred repeatedly in recorded history and over an even longer time scale (Helama et al, 2021) are likely to happen again. We should and can be better prepared. This is not to say that we should pay attention to every conspiracy theorist or crank, or to paid advocates for energy corporations that seek short‐term profits at the expense of long‐term survival, but the dangers of climate disruption of all kinds are too great to ignore. Instead of our current rather one‐dimensional thinking, we need an “all‐risks” approach to the subject: learning from the past and the present to prepare for the future.

4.
5.
6.
7.
Even if the predominant model of science communication with the public is now based on dialogue, many experts still adhere to the outdated deficit model of informing the public. Subject Categories: Genetics, Gene Therapy & Genetic Disease, S&S: History & Philosophy of Science, S&S: Ethics

During the past decades, public communication of science has undergone profound changes: from policy‐driven to policy‐informing, from promoting science to interpreting science, and from dissemination to interaction (Burgess, 2014). These shifts in communication paradigms have an impact on what is expected from scientists who engage in public communication: they should be seen as fellow citizens rather than experts whose task is to increase the scientific literacy of the lay public. Many scientists engage in science communication because they see this as their responsibility toward society (Loroño‐Leturiondo & Davies, 2018). Yet, a significant proportion of researchers still “view public engagement as an activity of talking to rather than with the public” (Hamlyn et al, 2015). The highly criticized “deficit model,” which sees the role of experts as educating the public to mitigate skepticism, still persists (Simis et al, 2016; Suldovsky, 2016).

Indeed, a survey we conducted among experts in training seems to corroborate the persistence of the deficit model even among younger scientists. Based on these results and our own experience with organizing public dialogues about human germline gene editing (Box 1), we discuss the implications of this outdated science communication model and an alternative model of public engagement that aims to align science with the needs and values of the public.

Box 1: The DNA‐dialogue project

The Dutch DNA‐dialogue project invited citizens to discuss and form opinions about human germline gene editing. During 2019 and 2020, the project organized twenty‐seven dialogues with professionals, such as embryologists and midwives, and various lay audiences. Different scenarios of a world in 2039 (https://www.rathenau.nl/en/making‐perfect‐lives/discussing‐modification‐heritable‐dna‐embryos) served as the starting point. Participants expressed their initial reactions to these scenarios with emotion‐cards and thereby explored the values they themselves and other participants deemed important as they elaborated further. Starting each dialogue in this way provides a context that enables everyone to participate in dialogue about complex topics such as human germline gene editing and demonstrates that scientific knowledge should not be a prerequisite for participation.

An important example of “different” relevant knowledge surfaced during a dialogue with children between 8 and 12 years old in the Sophia Children’s Hospital in Rotterdam (Fig 1). Most adults in the DNA‐dialogues accepted human germline gene modification for severe genetic diseases, as they wished the best possible care and outcome for their children. The children at Sophia, however, stated that they would find it terrible if their parents had altered something about them before they had been born; their parents would not even have known them. Some children went so far as to say that they would no longer be themselves without their genetic condition, and that their condition had also given them experiences they would rather not have missed.

Figure 1: Children participating in a DNA‐dialogue meeting. Photograph by Levien Willemse.

8.
Open Science calls for transparent science and involvement of various stakeholders. Here are examples of and advice for meaningful stakeholder engagement. Subject Categories: Economics, Law & Politics, History & Philosophy of Science

The concepts of Open Science and Responsible Research and Innovation call for a more transparent and collaborative science, and more participation of citizens. The way to achieve this is through cooperation with different actors or “stakeholders”: individuals or organizations who can contribute to, or benefit from, research, regardless of whether they are researchers themselves or not. Examples include funding agencies, citizens associations, patients, and policy makers (https://aquas.gencat.cat/web/.content/minisite/aquas/publicacions/2018/how_measure_engagement_research_saris1_aquas2018.pdf). Such cooperation is even more relevant in the current, challenging times—even apart from a global pandemic—when pseudo‐science, fake news, nihilist attitudes, and ideologies too often threaten social and technological progress enabled by science. Stakeholder engagement in research can inform and empower citizens, help render research more socially acceptable, and enable policies grounded in evidence‐based knowledge. Beyond that, stakeholder engagement is also beneficial to researchers and to research itself. In a recent survey, the majority of scientists reported benefits from public engagement (Burns et al, 2021). These can include increased mutual trust and mutual learning, improved social relevance of research, and improved adoption of results and knowledge (Cottrell et al, 2014). Finally, stakeholder engagement is often regarded as an important factor to sustain public investment in the life sciences (Burns et al, 2021).
Here, we discuss different levels of stakeholder engagement by way of example, presenting various activities organized by European research institutions. Based on these experiences, we propose ten reflection points that we believe should be considered by the institutions, the scientists, and the funding agencies to achieve meaningful and impactful stakeholder engagement.

9.
Synthetic biology could harness the ability of microorganisms to use highly toxic cyanide compounds for growth, which could be applied to the bioremediation of cyanide‐contaminated mining wastes and sites. Subject Categories: Biotechnology & Synthetic Biology, Evolution & Ecology, Metabolism

Cyanide is a highly toxic chemical produced in large amounts by the mining and jewellery industries, steel manufacturing, coal coking, food processing and chemical synthesis (Luque‐Almagro et al, 2011). The mining industry uses so‐called cyanide leaching to extract gold and other precious metals from ores, which leaves large amounts of cyanide‐containing liquid wastes with arsenic, mercury, lead, copper, zinc and sulphuric acid as co‐contaminants. Although these techniques are very efficient, they still produce about one million tonnes of toxic wastewaters each year, which are usually stored in artificial ponds that are prone to leaching or dam breaks and pose a major threat to the environment and human health (Luque‐Almagro et al, 2016). In 2000, a dam burst in Baia Mare, Romania, caused one of the worst environmental disasters in Europe. Liquid waste from a gold mining operation containing about 100 tonnes of cyanide spilled into the Somes River and eventually reached the Danube, killing up to 80% of wildlife in the affected areas. A more recent spill was caused by a blast furnace at Burns Harbor, IN, USA, which released 2,400 kg of ammonia and 260 kg of cyanide, at concentrations more than 1,000 times over the legal limit, into the Calumet River and Lake Michigan, severely affecting wildlife. Notwithstanding the enormous damage such major spills cause, industrial activities that continuously release small amounts of waste are similarly dangerous for human and environmental health.

The European Parliament, as part of its General Union Environment Action Programme, has called for a ban on cyanide in mining activities to protect water resources and ecosystems against pollution. Although several EU member states have joined this initiative, there is still no binding legislation. Similarly, there are no general laws in the USA to prevent cyanide spills, and the former administration even authorized the use of cyanide to control predators in agriculture.

10.
11.

In “Structural basis of transport and inhibition of the Plasmodium falciparum transporter PfFNT” by Lyu et al (2021), the authors depict the inhibitor MMV007839 in its hemiketal form in Fig 3A and F, Fig 4C, and Appendix Figs S10A, B and S13. We note that Golldack et al (2017) reported that the linear vinylogous acid tautomer of MMV007839 constitutes the binding and inhibitory entity of PfFNT. The authors are currently obtaining higher resolution cryo‐EM structural data of MMV007839‐bound PfFNT to ascertain which of the interconvertible tautomers is bound, and the paper will be updated accordingly.

12.

Correction to: The EMBO Journal (2021) 40: e107786. DOI 10.15252/embj.2021107786 | Published online 8 June 2021

The authors would like to add three references to the paper: Starr et al and Zahradník et al also reported that the Q498H or Q498R mutation has enhanced binding affinity to ACE2; and Liu et al reported on the binding of bat coronavirus to ACE2.

Starr et al and Zahradník et al have now been cited in the Discussion section, and the following sentence has been corrected from:

“According to our data, the SARS‐CoV‐2 RBD with Q498H increases the binding strength to hACE2 by 5‐fold, suggesting the Q498H mutant is more ready to interact with human receptor than the wildtype and highlighting the necessity for more strict control of virus and virus‐infected animals”.

to:

“Here, according to our data and two recently published papers, the SARS‐CoV‐2 RBD with Q498H or Q498R increases the binding strength to hACE2 (Starr et al, 2020; Zahradník et al, 2021), suggesting the mutant with Q498H or Q498R is more ready to interact with human receptor than the wild type and highlighting the necessity for more strict control of virus and virus‐infected animals”.

The Liu et al citation has been added to the following sentence:

“In another paper published by our group recently, RaTG13 RBD was found to bind to hACE2 with much lower binding affinity than SARS‐CoV‐2 though RaTG13 displays the highest whole‐genome sequence identity (96.2%) with the SARS‐CoV‐2 (Liu et al, 2021)”.

Additionally, the authors have added the GISAID accession IDs to the sequence names of the SARS‐CoV‐2 in two human samples (Discussion section). To make identification unambiguous, the sequence names have been updated from “SA‐lsf‐27 and SA‐lsf‐37” to “GISAID accession ID: EPI_ISL_672581 and EPI_ISL_672589”.

Lastly, the authors declare in the Materials and Methods section that all experiments employed SARS‐CoV‐2 pseudovirus in cultured cells.
These experiments were performed in a BSL‐2 laboratory and approved by the Science and Technology Conditions Platform Office, Institute of Microbiology, Chinese Academy of Sciences.

These changes are herewith incorporated into the paper.

13.
The COVID‐19 pandemic has triggered a new bout of anti‐vaccination propaganda, often grounded in pseudoscience and misinterpretation of evolutionary biology. Subject Categories: Economics, Law & Politics, Microbiology, Virology & Host Pathogen Interaction, Science Policy & Publishing

Towards the end of the summer of 2021, there seemed cause for cautious optimism about putting this pandemic behind us. It was clear that the route of viral transmission was airborne and not via surfaces (Goldman, 2021a), which means that masks are very efficient at reducing the spread of SARS‐CoV‐2. The number of cases in the United States and Europe was declining, and the first vaccines became available, with many people lining up to get their jabs. But not all. A significant portion of the population has been refusing to get vaccinated, some of whom were fooled or encouraged by pseudoscientific misinformation propagated on the Internet.

14.
Lessons from implementing quality control systems in an academic research consortium to improve Good Scientific Practice and reproducibility. Subject Categories: Microbiology, Virology & Host Pathogen Interaction, Science Policy & Publishing

Low reproducibility rates within biomedical research negatively impact productivity and translation. One promising approach to enhance the transfer of robust results from preclinical research into clinically relevant and transferable data is the systematic implementation of quality measures in daily laboratory routines.
Today's fast‐evolving research environment needs effective quality measures to ensure reproducibility and data integrity (Macleod et al, 2014; Begley et al, 2015; Begley & Ioannidis, 2015; Baker, 2016). Academic research institutions and laboratories may be as committed to good scientific practices (GSPs) as their counterparts in the biotech and pharmaceutical industry, but they operate largely without clearly defined standards (Bespalov et al, 2021; Emmerich et al, 2021). Although many universities expect their scientists to adhere to GSPs, they often neither systematically support nor monitor the quality of their research activities. Peer review of publications is still regarded as the primary validation of quality control in academic research. However, reviewers only assess work after it has been performed—often over years—and interventions in the experimental process are thus no longer possible.

The reasons for the lack of dedicated quality management (QM) implementations in academic laboratories include an anticipated overload of regulatory tasks that could negatively affect productivity, concerns about the loss of scientific freedom, and, importantly, limited resources in academia and academic funding schemes.

15.
Commercial screening services for inheritable diseases raise concerns about pressure on parents to terminate “imperfect babies”. Subject Categories: S&S: Economics & Business, Molecular Biology of Disease

Nearly two decades have passed since the first draft sequences of the human genome were published, at the eyewatering cost of nearly US$3 billion for the publicly funded project. Sequencing costs have dropped drastically since, and a range of direct‐to‐consumer genetics companies now offer partial sequencing of your individual genome in the US$100 price range, and whole‐genome sequencing for less than US$1,000.

While such tests are mainly for personal use, there have also been substantial price drops in clinical genome sequencing, which has greatly enabled the study of, and screening for, inheritable disorders. This has both advanced our understanding of these diseases in general and benefitted early diagnosis of many genetic disorders, which is crucial for early and efficient treatment. Such detection can, in fact, now occur long before birth: from cell‐free DNA testing during the first trimester of pregnancy, to genetic testing of embryos generated by in vitro fertilization, to preconception carrier screening of parents to find out if both are carriers of an autosomal recessive condition. While such prenatal testing of foetuses or embryos primarily focuses on diseases caused by chromosomal abnormalities, technological advances also allow for the testing of an increasing number of heritable monogenic conditions in cases where the disease‐causing variants are known.

The medical benefits of such screening are obvious: I personally have lost two pregnancies, one to Turner syndrome and the other to an extremely rare and lethal autosomal recessive skeletal dysplasia, and I know first‐hand the heartbreak and devastation involved in finding out that you will lose the child you already love so much. It should be noted, though, that very rarely Turner syndrome is survivable, and the long‐term outlook is typically good in those cases (GARD, 2021).
In addition, I have Kallmann syndrome, a highly genetically complex dominant endocrine disorder (Maione et al, 2018), and early detection and treatment make a difference in outcome. Being able to screen early during pregnancy or childhood therefore has significant benefits for affected children. Many other genetic disorders similarly benefit from prenatal screening and detection.

But there is also obvious cause for concern: the concept of “designer babies” selected for sex, physical features, or other apparent benefits is well entrenched in our society – and indeed our culture – as a product of a dystopian future. As just one recent example, Philip Ball, writing for the Guardian in 2017, described designer babies as “an ethical horror waiting to happen” (Ball, 2017). In addition, various commercial enterprises hope to capitalize on these screening technologies. Orchid Inc claims that their preconception screening allows you to “… safely and naturally, protect your baby from diseases that run in your family”. The fact that this is hugely problematic, if not impossible, from a technological perspective has already been extensively clarified by Lior Pachter, a computational biologist at Caltech (Pachter, 2021). George Church at Harvard University suggested creating a DNA‐based dating app that would effectively prevent people who are both carriers of certain genetic conditions from matching (Flynn, 2019). Richard Dawkins at Oxford University recently commented that “…the decision to deliberately give birth to a Down [syndrome] baby, when you have the choice to abort it early in the pregnancy, might actually be immoral from the point of view of the child’s own welfare” (Dawkins, 2021).

These are just a few examples, and as screening technology becomes cheaper, more companies will jump on the bandwagon of perfect “healthy” babies. Conversely, this creates a risk that parents come under pressure to terminate pregnancies with “imperfect babies”, as I have experienced myself.
What does this mean for people with rare diseases? From my personal moral perspective, the ethics are clear in cases where the pregnancy is clearly not viable. Yet, there are literally thousands of monogenic conditions and even chromosomal abnormalities, not all of which are lethal, and we are making constant strides in treating conditions that were previously considered untreatable. In addition, there is still societal prejudice against people with genetic disorders, and ignorance about what it is like to live with a rare disease. In reality, however, all rare disease patients I have encountered are happy to be alive and here, even those whose conditions have a significant impact on their quality of life. Many of us also don't like the terms “disorder” or “syndrome”, as we are so much more than merely a disorder or a syndrome.

Unfortunately, I also see many parents panic about the results of prenatal testing. Without adequate genetic counselling, they do not understand that their baby’s condition may actually have quite a good prognosis, without major impact on quality of life. As a result, a mere diagnosis of a rare disease – many of which would not even necessarily have been detectable until later in life, if at all – can be enough to make parents consider termination, due to social stigma.

This of course raises the thorny issue of regulation, which ranges from the USA, where there is little to no regulation of such screening technologies (ACOG, 2020), to Sweden, where such screening technologies are banned with the exception of specific high‐risk or lethal medical conditions that both parents are known carriers for (SMER, 2021). As countries come to grips with both the potential and the risks involved in new screening technologies, medical ethics boards have approached this issue. And as screening technologies advance, we will need to ask ourselves difficult questions as a society.
I know that in the world of “perfect babies” that some of these companies and individuals are trying to promote, I would not exist, nor would my daughter. I have never before found myself so often having to explain to people that our lives have value, and I do not want to have to continue doing so. Like other forms of diversity, genetic diversity is important and makes us richer as a society. As these screening technologies rapidly advance and become more widely available, regulation should at least guarantee that screening involves proper genetic counselling by a trained clinical geneticist, so that parents actually understand the implications of the test results. More urgently, we need to address societal attitudes towards rare diseases, face the prejudice and fear directed at patients, and understand that abolishing genetic diversity in a quest for perfect babies would impoverish humanity.

16.
Similar to persister bacterial cells that survive antibiotic treatments, some cancer cells can evade drug treatments. This Commentary discusses the different classes of persister cells and their implications for developing more efficient cancer treatments. Subject Categories: Cancer

Similar to persister bacterial cells that survive antibiotic treatments, small populations of cancer cells can evade drug treatments and cause recurrent disease. This Commentary discusses the different classes of persister cells and their implications for developing more efficient cancer treatments.

In 1944, Joseph Bigger, a lieutenant‐colonel in the British Royal Army Medical Corps, reported a peculiar population of bacteria that could survive very high concentrations of penicillin (Bigger, 1944). He termed these hard‐to‐kill cells “persisters” and argued they might explain the limited success of penicillin in curing infections. At the time, 16 years after antibiotics revolutionized the treatment of bacterial infections, this was a groundbreaking hypothesis, as partial killing was largely attributed to inadequate blood supply or tissue barriers. Later on, the understanding that cell‐intrinsic properties may contribute to transient drug tolerance sparked research aimed at targeting microbial persister cells. In a seminal paper, Sharma and colleagues (Sharma et al, 2010) showed that reversible cell‐intrinsic resistance can also be observed in cancer cells in response to therapy. Similar to bacterial persisters, these cancer persister cells gave rise to drug‐sensitive progeny following a short “drug holiday” and did not harbor any known resistance‐mediating mutation. However, in contrast to microbial persisters, which are largely dormant, a small fraction of cancer persister cells were able to resume proliferation even under continued drug treatment. Understanding the similarities and differences between cancer and microbial persister cells is pivotal to devising approaches to eliminate them (Fig 1).

Figure 1. Different persister classes: (A) classic persisters, (B) targeted‐persisters, and (C) immune‐persisters. The mechanism of escape depends on the mode of action of the drug.
While classical persisters are common to both bacteria and cancer cells, other persister classes are cancer‐specific and are associated with the ability of cancer cells to probe a wide range of cell states and lineage trajectories.

So why can some bacteria persist in the face of therapy? The answer largely lies in the mode of action of antimicrobial drugs. Penicillin and newer‐generation antibiotics target bacterial cell division. As such, if the bacteria are dormant or reside in a low metabolic state, they are unaffected by the drug. Dormant bacteria are frequently resistant to multiple stressors and drugs, making them difficult to eradicate even with very aggressive treatment. Unsurprisingly, similar phenomena are observed in the context of chemotherapy in cancer. Like antibiotics, early cancer therapies were largely based on drugs that target highly proliferative cells. Sustained proliferation in the absence of external stimuli is one of the hallmarks of cancer; because cancer cells divide more frequently than most normal cells, they are more likely to be killed by chemotherapy. As both antibiotics and chemotherapy target proliferating cells, it is not surprising that cell dormancy was linked to persistence in both cases. “Classical” nondividing persister cells have been implicated in treatment failure both in cancer and in microbial infections and are thought to provide a reservoir for subsequent relapse events.

In the last 20 years, a new class of cancer drugs, called targeted therapies, has emerged and revolutionized patient care. Unlike chemotherapies or antibiotics, these drugs do not target proliferating cells per se but rather act on specific molecular targets associated with cancer. For example, some targeted therapies act on proteins that are more abundant on the surface of cancer cells than on that of normal cells.
While slow proliferation has also been implicated in tolerance to targeted therapy, multiple additional mechanisms are at play that are not characteristic of microbial persister cells. For instance, oncogene‐targeted therapies take advantage of the acquired dependence of a cancer cell on the activity of a single oncogenic gene product. As many oncogenes control cell metabolism (Levine & Puzio‐Kuter, 2010), for example by regulating glucose uptake, drugs that target oncogene addiction can have profound effects on metabolism. In line with this, oncogene‐persisters (persisters that escape killing by oncogene‐targeted therapies) show higher levels of fatty acid oxidation (Oren et al, 2021). This shift away from the “Warburg” glycolytic state into a more mitochondrially active mode of energy production, which resembles non‐transformed cells, might indicate release from oncogene addiction. Importantly, this shift does not lead to overall lower metabolic activity and in some cases might even allow arrested persisters to resume the cell cycle in the presence of the drug. Such high modularity is possible because cancer cells can, under certain conditions, tap into a vast space of cellular states reflecting different tissues and developmental trajectories. Cancer persister cell plasticity is perhaps best exemplified by the phenotypic transformation from non‐small‐cell lung adenocarcinoma to small‐cell lung cancer upon prolonged treatment with EGFR inhibitors (Shaurova et al, 2020). Such lineage switching accounts for up to 14% of acquired resistance to EGFR‐targeted therapy. Clinical data from relapsed patients strongly support the hypothesis that this transformation happens via persister cells that were able to withstand EGFR therapy. Taken together, these observations show that cancer persister cells can circumvent oncogene withdrawal by adopting alternative cell states.
Notably, these changes do not necessarily require any genetic alteration; in theory, they can be reversible and potentially mediated by microenvironmental signaling.

The most recent addition to the cancer‐fighting arsenal are immunotherapies designed to boost immune responses. Immune‐persisters, cells that can evade the immune response, have been reported in multiple cancer types and are thought to underlie the late relapses frequently observed in patients (Shen et al, 2020). While tumor dormancy might play a role in this context as well, it is interesting to note that immune evasion can be achieved by modulating immune checkpoint molecules without any need to suppress cell proliferation. Furthermore, in the case of CAR T‐cell therapy, a class of immunotherapy based on engineered T cells, persistence might be viewed as a dynamic cell‐to‐cell communication process. It was shown that, to elicit killing, a cancer cell has to undergo multiple interactions with a T cell (Weigelin et al, 2021). This sequential multihit process, which can take more than an hour in vivo, may allow cancer cells to modulate the cytotoxic T cell in a way that favors their persistence. Hence, understanding what underlies T‐cell phenotypes might be as important as studying the cancer persister cells they target.

The holy grail of the persister field is finding ways to target these drug‐tolerant cells in a manner that prevents disease recurrence. However, given that at least three classes of persisters have already been reported, and more are expected to emerge as we continue to expand our therapeutic toolbox, would it even be possible to implement a single approach to eliminate them? Studies that searched for a magic bullet to eliminate persister cells were largely based on the hope that persister cells would be less heterogeneous than the drug‐naïve cell population they were derived from (Cabanos & Hata, 2021; Hangauer et al, 2017).
If such convergence on similar cell states occurred upon treatment, it would reduce the need to combine multiple drugs to eliminate the entire cell population. Unfortunately, it seems that persister cells come in multiple forms and that distinct persister phenotypes may coexist within a single tumor. The major drivers of this heterogeneity remain unclear and may include tumor lineage, treatment type, or a combination of both. Moreover, it is unknown whether the heterogeneity in persister phenotypes can be predicted from the drug‐naïve population, and how these diverse persister fates are associated with clinical outcomes. Understanding persister heterogeneity is critical, as the simplistic approach of trying to eliminate as many persister cells as possible assumes that all cells are equally pathogenic, which might not be the case if only a subset of them is able to contribute to relapse. Furthermore, persister cells might differ in their aptitude to give rise to cells that harbor a resistance‐mediating mutation. Such differences in evolvability must be considered when weighing possible treatments. Answering these questions will be key to devising effective therapeutic approaches to eliminate persister cells. Over the last century, the study of microbial persistence has provided important insights into how to fight infections. Hopefully, in the years to come, we will build upon this valuable knowledge foundation and extend it to devise better ways to fight cancer.

17.
18.
Segregation of the largely non‐homologous X and Y sex chromosomes during male meiosis is not a trivial task, because their pairing, synapsis, and crossover formation are restricted to a tiny region of homology, the pseudoautosomal region. In humans, meiotic X‐Y missegregation can lead to 47,XXY offspring, also known as Klinefelter syndrome, but to what extent genetic factors predispose to paternal sex chromosome aneuploidy has remained elusive. In this issue, Liu et al (2021) provide evidence that deleterious mutations in the USP26 gene constitute one such factor.

Subject Categories: Cell Cycle, Development & Differentiation, Molecular Biology of Disease

Analyses of Klinefelter syndrome patients and Usp26‐deficient mice have revealed a genetic influence on age‐dependent sex chromosome missegregation during male meiosis.

Multilayered mechanisms have evolved to ensure successful X‐Y recombination, as a prerequisite for subsequent normal chromosome segregation. These include a distinct chromatin structure as well as specialized proteins on the pseudoautosomal region (Kauppi et al, 2011; Acquaviva et al, 2020). Even so, X‐Y recombination fails fairly often, especially in the face of even modest meiotic perturbations. It is perhaps not surprising then that X‐Y aneuploidy—but not autosomal aneuploidy—in sperm increases with age (Lowe et al, 2001; Arnedo et al, 2006), as does the risk of fathering sons with Klinefelter syndrome (De Souza & Morris, 2010).

Klinefelter syndrome is one of the most common aneuploidies in liveborn individuals (Thomas & Hassold, 2003). While most human trisomies result from errors in maternal chromosome segregation, this is not the case for Klinefelter syndrome, where the extra X chromosome is equally likely to be of maternal or paternal origin (Thomas & Hassold, 2003; Arnedo et al, 2006). Little is known about genetic factors in humans that predispose to paternal XY aneuploidy, i.e., that increase the risk of fathering Klinefelter syndrome offspring. The general notion has been that paternally derived Klinefelter syndrome arises stochastically. However, fathers of Klinefelter syndrome patients have elevated rates of XY aneuploid sperm (Lowe et al, 2001; Arnedo et al, 2006), implying a persistent defect in spermatogenesis in these individuals rather than a one‐off meiotic error.

To identify possible genetic factors contributing to Klinefelter syndrome risk, Liu et al (2021) performed whole‐exome sequencing in a discovery cohort of > 100 Klinefelter syndrome patients, followed by targeted sequencing in a much larger cohort of patients and controls, as well as Klinefelter syndrome family trios. The authors homed in on a mutational cluster (“mutated haplotype”) in ubiquitin‐specific protease 26 (USP26), a testis‐expressed gene located on the X chromosome.
Effects of this gene’s loss of function (Usp26‐deficient mice) on spermatogenesis have recently been independently reported by several laboratories and ranged from no detectable fertility phenotype (Felipe‐Medina et al, 2019) to subfertility/sterility associated with both meiotic and spermiogenic defects (Sakai et al, 2019; Tian et al, 2019). With their Klinefelter syndrome cohort findings in hand, Liu et al (2021) also turned to Usp26 null mice, paying particular attention to X‐Y chromosome behavior and—unlike earlier mouse studies—including older mice in their analyses. They found that Usp26‐deficient animals often failed to achieve stable pairing and synapsis of the X‐Y chromosomes in spermatocytes, produced XY aneuploid sperm at an abnormally high frequency, and sometimes also sired XXY offspring. Importantly, these phenotypes only occurred at an advanced age: XY aneuploidy was seen in six‐month‐old, but not two‐month‐old, Usp26‐deficient males. Moreover, levels of spindle assembly checkpoint (SAC) proteins were also reduced in six‐month‐old males. Thus, in older Usp26 null mice, the combination of less efficient X‐Y pairing and less stringent SAC‐mediated surveillance of faithful chromosome segregation allows for sperm aneuploidy, providing another example of SAC leakiness in males (see Lane & Kauppi, 2019 for discussion).

Liu et al’s analyses shed some light on the molecular mechanisms that may be responsible for the reduced efficiency of X‐Y pairing and synapsis in Usp26‐deficient spermatocytes. USP26 encodes a deubiquitinating enzyme that has several substrates in the testis. Because USP26 prevents degradation of these substrates, their levels should be downregulated in Usp26 null testes. Liu et al (2021) show that USP26 interacts with TEX11, a protein required for stable pairing and normal segregation of the X and Y chromosomes in mouse meiosis (Adelman & Petrini, 2008). USP26 can de‐ubiquitinate TEX11 in vitro, and in Usp26 null testes, TEX11 was almost undetectable.
It is worth noting that USP26 has several other known substrates, including the androgen receptor (AR); USP26 disruption therefore likely compromises spermatogenesis via multiple mechanisms. For example, AR signaling‐dependent hormone levels are misregulated in Usp26 null mice (Tian et al, 2019).

The sex chromosome phenotypes observed in Usp26 null mice predict that men with USP26 mutations may be fertile but produce XY aneuploid sperm at an abnormally high frequency, and that their spermatogenic defects should increase with age (Fig 1). These predictions were testable, because the mutated USP26 haplotype, present in 13% of Klinefelter syndrome patients, was reasonably common also in fertile men (7–10%). Indeed, sperm XY aneuploidy was substantially higher in fertile men with the mutated USP26 haplotype than in those without USP26 mutations; some mutation carriers produced > 4% aneuploid sperm. Moreover, age‐dependent oligospermia was also associated with the mutated USP26 haplotype.

Figure 1. Mutated USP26 as a genetic risk factor for age‐dependent X‐Y defects in spermatogenesis. Mouse genetics demonstrate that deleterious USP26 mutations lead to less‐efficient X‐Y pairing and recombination with advancing age. A concomitant decrease of spindle assembly checkpoint (SAC) protein levels leads to less‐efficient elimination of metaphase I spermatocytes that contain misaligned X and Y chromosomes. This allows for the formation of XY aneuploid sperm in older individuals and subsequently an increased age‐dependent risk of fathering Klinefelter syndrome (KS) offspring, two correlates also observed in human USP26 mutation carriers.
At the same time, oligospermia/subfertility also increases with advanced age in both Usp26‐deficient mice and USP26 mutation‐carrying men, tempering the risk of Klinefelter syndrome offspring but also decreasing fecundity.

As indicated by its prevalence in the normal control population, the mutated USP26 haplotype is not selected against in the human population. With > 95% of sperm in USP26 mutation carriers having a normal haploid chromosomal composition, the risk of producing (infertile) Klinefelter syndrome offspring remains modest, likely explaining why USP26 mutant alleles are not eliminated. Given that full Usp26 disruption barely affects the fertility of male mice during their prime reproductive age (Felipe‐Medina et al, 2019; Tian et al, 2019; Liu et al, 2021), there is little reason to assume strong negative selection against USP26 variants in humans. USP26, uncovered by Liu et al as the first genetic risk factor predisposing to sperm X‐Y aneuploidy and paternal‐origin Klinefelter syndrome offspring in humans, may be just one of many: 90% of Liu et al’s Klinefelter syndrome cases were not associated with USP26 mutations. But even in the age of genomics, the discovery of Klinefelter syndrome risk factors is not straightforward, since most sperm of risk mutation carriers will not be XY aneuploid and thus will not give rise to Klinefelter syndrome offspring. In addition, as Usp26 null mice demonstrate, both genetic and non‐genetic modifiers affect the penetrance of the XY aneuploidy phenotype: spermatogenesis in the absence of Usp26 was impaired in the DBA/2 but not the C57BL/6 mouse strain background (Sakai et al, 2019), and in older mice, there was substantial inter‐individual variation in the severity of the X‐Y defect (Liu et al, 2021). In human cohorts, genetic and non‐genetic modifiers are expected to blur the picture even more.

Future identification of sex chromosome aneuploidy risk factors has human health implications beyond Klinefelter syndrome.
Firstly, XXY incidence is not only relevant for Klinefelter syndrome livebirths—it also contributes to stillbirths and spontaneous abortions, at a 4‐fold higher rate than to livebirths (Thomas & Hassold, 2003). Secondly, persistent meiotic X‐Y defects can, over time, result in oligospermia and even infertility. Since the mean age of first‐time fathers is steadily rising and currently well over 30 years in many Western countries, age‐dependent spermatogenic defects will be of ever‐increasing clinical relevance.

19.
Removing the 14‐day limit for research on human embryos without public deliberation could jeopardize public trust in and support of research on human development. Subject Categories: Development & Differentiation, S&S: Economics & Business, Molecular Biology of Disease

In On Revolution, Hannah Arendt, one of the great political thinkers of the 20th century, stated that “promises and agreements deal with the future and provide stability in the ocean of future uncertainty where the unpredictable may break in from all sides” (Arendt, 1963). She cited the Mayflower Compact, which was “drawn up on the ship and signed upon landing” on the uncharted territory of the American continent, as such an example of promise in Western history. Human beings are born with the capacity to act freely amid the vast ocean of uncertainty, but this capacity also creates unpredictable and irreversible consequences. Thus, in society and in politics, moral virtues can only persist through “making promises and keeping them” (Arendt, 1959).

20.