Similar Articles (20 results)
1.
Lessons from implementing quality control systems in an academic research consortium to improve Good Scientific Practice and reproducibility. Subject Categories: Microbiology, Virology & Host Pathogen Interaction, Science Policy & Publishing

Low reproducibility rates within biomedical research negatively impact productivity and translation. One promising approach to enhance the transfer of robust results from preclinical research into clinically relevant and transferable data is the systematic implementation of quality measures in daily laboratory routines.
Today's fast‐evolving research environment needs effective quality measures to ensure reproducibility and data integrity (Macleod et al, 2014; Begley et al, 2015; Begley & Ioannidis, 2015; Baker, 2016). Academic research institutions and laboratories may be as committed to good scientific practices (GSPs) as their counterparts in the biotech and pharmaceutical industry but operate largely without clearly defined standards (Bespalov et al, 2021; Emmerich et al, 2021). Although many universities expect their scientists to adhere to GSPs, they often neither systematically support nor monitor the quality of their research activities. Peer review of publications is still regarded as the primary quality control in academic research. However, reviewers only assess work after it has been performed—often over years—and interventions in the experimental process are thus no longer possible. The reasons for the lack of dedicated quality management (QM) implementations in academic laboratories include an anticipated overload of regulatory tasks that could negatively affect productivity, concerns about the loss of scientific freedom, and, importantly, limited resources in academia and academic funding schemes.

2.
Conditional Access Agreements could improve replicability of research and enhance Open Science without jeopardizing intellectual property rights. Subject Categories: Economics, Law & Politics, Science Policy & Publishing

Replicability is a cornerstone of the scientific enterprise. Validating published scientific findings enhances their credibility and helps to build a self‐correcting cumulative knowledge base. It also increases public trust in science (Wingen et al, 2020). Unfortunately, the scientific community has been facing a considerable problem for at least two decades: the replication crisis (Ioannidis, 2005). Scientists in various disciplines have significant difficulties trying to verify published scientific findings (Baker, 2016). One prominent factor accounting for non‐replicability is diminished access to the research materials required for replication (replication materials).
This problem is particularly noticeable in computational studies: research that utilizes computational models, often with an immense amount of data. With the rise of powerful computers, machine learning and big data, computational studies are increasingly used in a variety of disciplines. This trend is evident in biology as well, including in systems biology, genomics, proteomics, and other areas (Markowetz, 2017). A famous example that demonstrates the importance of computational biology is the Human Genome Project. Developments in computational biology are crucial in advancing promising research prospects in areas such as vaccine antigen design and structural bioinformatics.
A scientific paper alone would not typically enable others to replicate the study described therein (Merali, 2010). Replicating a computational study generally requires access to the code, software documentation, datasets, workflows, and other information regarding the methodology (Easterbrook, 2014). In most cases, however, authors do not publicly share these elements, which renders such studies impossible to replicate (Merali, 2010; Stodden et al, 2018). The problem of diminished access to replication materials has been reported as a major stumbling block impeding the replicability of computational biology studies (Crook et al, 2013; Miłkowski et al, 2018).
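To make the notion of "replication materials" concrete, here is a minimal, hypothetical sketch in Python of what a self‐contained computational analysis can look like: the script pins its random seed and writes a machine‐readable record of the result together with the software environment, so that a third party holding only these files can re‐run the analysis and check for an identical outcome. The toy analysis, the output file name, and all identifiers are illustrative assumptions, not anything described in the cited studies.

```python
import json
import platform
import random
import sys

SEED = 42  # fixed seed: unseeded randomness is a common source of non-replicability

def toy_analysis(n: int = 100_000) -> float:
    """Monte Carlo estimate of pi; a stand-in for a real computational model."""
    rng = random.Random(SEED)
    # Count random points in the unit square that fall inside the quarter circle.
    hits = sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0 for _ in range(n))
    return 4.0 * hits / n

if __name__ == "__main__":
    result = toy_analysis()
    # Record everything a replicator needs to verify the run, not just the result.
    record = {
        "result": result,
        "seed": SEED,
        "python": sys.version,
        "platform": platform.platform(),
    }
    with open("replication_record.json", "w") as fh:  # hypothetical output file
        json.dump(record, fh, indent=2)
    print(f"pi estimate: {result} (seed={SEED})")
```

In practice a dataset manifest, a workflow description, and a dependency lock file would accompany such a script; the point the authors make is that these elements typically do not appear alongside the published paper.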

3.
Removing the 14‐day limit for research on human embryos without public deliberation could jeopardize public trust in and support of research on human development. Subject Categories: Development & Differentiation, S&S: Economics & Business, Molecular Biology of Disease

In On Revolution, Hannah Arendt, one of the great political thinkers of the 20th century, stated that “promises and agreements deal with the future and provide stability in the ocean of future uncertainty where the unpredictable may break in from all sides” (Arendt, 1963). She cited the Mayflower Compact, which was “drawn up on the ship and signed upon landing” on the uncharted territory of the American continent, as such an example of promise in Western history. Human beings are born with the capacity to act freely amid the vast ocean of uncertainty, but this capacity also creates unpredictable and irreversible consequences. Thus, in society and in politics, moral virtues can only persist through “making promises and keeping them” (Arendt, 1959).

4.
A survey of academics in Germany shows a lack of and a great demand for training in leadership skills. Subject Categories: Careers, Science Policy & Publishing

Success and productivity in science are measured largely by the number of publications in scientific journals and the acquisition of third‐party funding to finance further research (Detsky, 2011). Consequently, as young researchers advance in their careers, they become highly trained in directly related skills, such as scientific writing, so as to increase their chances of securing publications and grants. Acquiring leadership skills, however, is often neglected, as these do not contribute to the evaluation of scientific success (Detsky, 2011). Therefore, an early‐career researcher may become leader of a research group based on publication record and solicitation of third‐party funding, but without any training in leadership or team‐management skills (Lashuel, 2020). Leadership in the context of academic research requires a unique set of competencies, knowledge and skills in addition to “traditional” leadership skills (Anthony & Antony, 2017), such as managing change, adaptability, empathy, motivating individuals, and setting direction and vision. Academic leadership also requires promoting the research group’s reputation, networking, protecting staff autonomy, promoting academic credibility, and managing complexity (Anthony & Antony, 2017).

5.
Research needs a balance of risk‐taking in “breakthrough projects” and gradual progress. For building a sustainable knowledge base, it is indispensable to provide support for both. Subject Categories: Careers, Economics, Law & Politics, Science Policy & Publishing

Science is about venturing into the unknown to find unexpected insights and establish new knowledge. Increasingly, academic institutions and funding agencies such as the European Research Council (ERC) explicitly encourage and support scientists to pursue risky and, hopefully, ground‐breaking research. Such incentives are important and have been greatly appreciated by the scientific community. However, the success of the ERC has had its downsides, as other actors in the funding ecosystem have adopted the ERC’s focus on “breakthrough science” and its notions of scientific excellence. We argue that these tendencies are concerning, since disruptive breakthrough innovation is not the only form of innovation in research. While continuous, gradual innovation is often taken for granted, it could become endangered in a research and funding ecosystem that places ever higher value on breakthrough science. This is problematic since, paradoxically, breakthrough potential in science builds on gradual innovation. If the value of gradual innovation is not better recognized, the potential for breakthrough innovation may well be stifled.
Concerns that the hypercompetitive dynamics of the current scientific system may impede rather than spur innovative research have been voiced for many years (Alberts et al, 2014). As performance indicators continue to play a central role in promotions and grants, researchers are under pressure to publish extensively, quickly, and preferably in high‐ranking journals (Burrows, 2012). These dynamics increase the risk of mental health issues among scientists (Jaremka et al, 2020), dis‐incentivise relevant and important work (Benedictus et al, 2016), decrease the quality of scientific papers (Sarewitz, 2016), and induce conservative, short‐term thinking rather than the risk‐taking and original thinking required for scientific innovation (Alberts et al, 2014; Fochler et al, 2016). Against this background, strong incentives for fostering innovative and daring research are indispensable.

6.
Academic Core Facilities are optimally situated to improve the quality of preclinical research by implementing quality control measures and offering these to their users. Subject Categories: Methods & Resources, Science Policy & Publishing

During the past decade, the scientific community and outside observers have noted a concerning lack of rigor and transparency in preclinical research that has led to talk of a “reproducibility crisis” in the life sciences (Baker, 2016; Bespalov & Steckler, 2018; Heddleston et al, 2021). Various measures have been proposed to address the problem: from better training of scientists, to more oversight, to expanded publishing practices such as preregistration of studies. The recently published EQIPD (Enhancing Quality in Preclinical Data) System is, to date, the largest initiative that aims to establish a systematic approach for increasing the robustness and reliability of biomedical research (Bespalov et al, 2021). However, promoting a cultural change in research practices warrants broad adoption of the Quality System and its underlying philosophy. It is here that academic Core Facilities (CFs), the research service providers at universities and research institutions, can make a difference. It is fair to assume that a significant fraction of published data originated from experiments that were designed, run, or analyzed in CFs. These academic services play an important role in the research ecosystem by offering access to cutting‐edge equipment and by developing and testing novel techniques and methods that impact research in the academic and private sectors alike (Bikovski et al, 2020). Equipment and infrastructure are not their only value: CFs employ competent personnel with profound knowledge and practical experience in the specific field of interest, be it animal behavior, imaging, crystallography, genomics, or another specialty. Thus, CFs are optimally positioned to address concerns about the quality and robustness of preclinical research.

7.
Commercial screening services for inheritable diseases raise concerns about pressure on parents to terminate “imperfect babies”. Subject Categories: S&S: Economics & Business, Molecular Biology of Disease

Nearly two decades have passed since the first draft sequences of the human genome were published at the eyewatering cost of nearly US$3 billion for the publicly funded project. Sequencing costs have dropped drastically since, and a range of direct‐to‐consumer genetics companies now offer partial sequencing of your individual genome in the US$100 price range, and whole‐genome sequencing for less than US$1,000. While such tests are mainly for personal use, there have also been substantial drops in the price of clinical genome sequencing, which has greatly enabled the study of and screening for inheritable disorders. This has both advanced our understanding of these diseases in general and benefitted early diagnosis of many genetic disorders, which is crucial for early and efficient treatment. Such detection can, in fact, now occur long before birth: from cell‐free DNA testing during the first trimester of pregnancy, to genetic testing of embryos generated by in vitro fertilization, to preconception carrier screening of parents to find out if both are carriers of an autosomal recessive condition. While such prenatal testing of foetuses or embryos primarily focuses on diseases caused by chromosomal abnormalities, technological advances also allow for the testing of an increasing number of heritable monogenic conditions in cases where the disease‐causing variants are known. The medical benefits of such screening are obvious: I personally have lost two pregnancies, one to Turner's syndrome and the other to an extremely rare and lethal autosomal recessive skeletal dysplasia, and I know first‐hand the heartbreak and devastation involved in finding out that you will lose the child you already love so much. It should be noted, though, that very rarely Turner syndrome is survivable, and the long‐term outlook is typically good in those cases (GARD, 2021). In addition, I have Kallmann syndrome, a highly genetically complex dominant endocrine disorder (Maione et al, 2018), and early detection and treatment make a difference in outcome. Being able to screen early during pregnancy or childhood therefore has significant benefits for affected children. Many other genetic disorders similarly benefit from prenatal screening and detection. But there is also obvious cause for concern: the concept of “designer babies” selected for sex, physical features, or other apparent benefits is well entrenched in our society – and indeed culture – as a product of a dystopian future. As just one recent example, Philip Ball, writing for the Guardian in 2017, described designer babies as “an ethical horror waiting to happen” (Ball, 2017). In addition, various commercial enterprises hope to capitalize on these screening technologies. Orchid Inc claims that its preconception screening allows you to “… safely and naturally, protect your baby from diseases that run in your family”. That this is hugely problematic, if not impossible, from a technological perspective has already been extensively explained by Lior Pachter, a computational biologist at Caltech (Pachter, 2021). George Church at Harvard University suggested creating a DNA‐based dating app that would effectively prevent people who are both carriers for certain genetic conditions from matching (Flynn, 2019).
Richard Dawkins at Oxford University recently commented that “…the decision to deliberately give birth to a Down [syndrome] baby, when you have the choice to abort it early in the pregnancy, might actually be immoral from the point of view of the child’s own welfare” (Dawkins, 2021). These are just a few examples, and as screening technology becomes cheaper, more companies will jump on the bandwagon of perfect “healthy” babies. Conversely, this creates a risk that parents come under pressure to terminate pregnancies with “imperfect babies”, as I have experienced myself. What does this mean for people with rare diseases? From my personal moral perspective, the ethics are clear in cases where the pregnancy is clearly not viable. Yet, there are literally thousands of monogenic conditions and even chromosomal abnormalities, not all of which are lethal, and we are making constant strides in treating conditions that were previously considered untreatable. In addition, there is still societal prejudice against people with genetic disorders, and ignorance about what it is like to live with a rare disease. In reality, however, all rare disease patients I have encountered are happy to be alive and here, even those whose conditions have a significant impact on their quality of life. Many of us also don't like the terms “disorder” or “syndrome”, as we are so much more than merely a disorder or a syndrome. Unfortunately, I also see many parents panic about the results of prenatal testing. Without adequate genetic counselling, they do not understand that their baby’s condition may actually have quite a good prognosis without major impact on quality of life. Following from this, a mere diagnosis of a rare disease – many of which would not even necessarily have been detectable until later in life, if at all – can be enough to make parents consider termination, due to social stigma. This of course raises the thorny issue of regulation, which ranges from the USA, where there is little to no regulation of such screening technologies (ACOG, 2020), to Sweden, where such screening is banned except for specific high‐risk or lethal medical conditions for which both parents are known carriers (SMER, 2021). As countries come to grips with both the potential and the risks involved in new screening technologies, medical ethics boards have approached this issue. And as screening technologies advance, we will need to ask ourselves difficult questions as a society. I know that in the world of “perfect babies” that some of these companies and individuals are trying to promote, I would not exist, nor would my daughter. I have never before had to explain so often that our lives have value, and I do not want to continue having to do so. Like other forms of diversity, genetic diversity is important and makes us richer as a society. As these screening technologies quickly advance and become more widely available, regulation should at least guarantee that screening involves proper genetic counselling from a trained clinical geneticist, so that parents actually understand the implications of the test results. More urgently, we need to address societal attitudes towards rare diseases, face the prejudice and fear directed at patients, and understand that abolishing genetic diversity in a quest for perfect babies would impoverish humanity.

8.
9.
Efforts by the EU to improve its regulatory framework for importing GM food and feed have done nothing to make the process easier and more predictable for applicants. Subject Categories: Biotechnology & Synthetic Biology, Economics, Law & Politics, Plant Biology

The first genetically modified (GM) crops were introduced more than two decades ago and have been planted globally on more than 190 million hectares (ISAAA, 2020), a surface area larger than all the arable land in the EU. Thousands of risk assessments have consistently concluded that they are as safe as conventional crops in regard to human and animal health (Smyth et al, 2021) and many countries have been growing GM crops for years. Despite political commitments to innovation and investments into research (EC, 2010), the EU is still lagging behind in adopting this technology on a wider scale owing to diverging views among its member states, the European Commission (EC) and the European parliament. Various attempts to resolve this tension by legal and regulatory means have created the most cumbersome and byzantine regulatory system for GM crops in the world. The Implementing Regulation (EU) No 503/2013, meant to ease the regulatory process, has made things even more complicated.
A major conundrum for the EU is the need to import large quantities of protein‐rich crops such as soybean to supply the continent’s livestock industry with high‐quality feed. In light of the current Russia–Ukraine situation, which has added a layer of instability to already tense markets, the importance of the global agricultural market for ensuring food security is even more pronounced. Given the high adoption rate of GM crops outside the EU, most of these imported commodities inevitably contain GM material. Under EU law, food and feed products that contain or were produced from GM crops need an import authorisation by the EC, which is a lengthy, costly and unpredictable process. In 2002, the EU set up a centralised review system under Regulation (EC) 178/2002 (the General Food Law Regulation) and an independent scientific body to conduct this review: the European Food Safety Authority (EFSA). EFSA is responsible for performing the risk assessment of regulated food and feed products, including GM crops; its scientific advice, the “opinion”, is used by the EC to draft a decision on whether or not to authorise import. EU member states then vote on whether or not to follow the EC’s draft decision. To date, not a single GM product has received a qualified‐majority decision for authorisation; the EC then makes the final decision based on EFSA’s risk assessment. There are many reasons why the member states disagree, mostly owing to political and economic agendas. Some members with a large and important agri‐food sector tend to vote in line with EFSA’s opinions, while others consistently vote against authorisation or abstain, mainly for political reasons. This ongoing disagreement has made it very difficult to establish an EU‐wide policy for agricultural biotechnology.
…the continuous proliferation, update and reinterpretation of EU requirements means that studies that were conducted in compliance with the guidelines at a particular time may no longer comply with changed requirements…

10.
Even if the predominant model of science communication with the public is now based on dialogue, many experts still adhere to the outdated deficit model of informing the public. Subject Categories: Genetics, Gene Therapy & Genetic Disease, S&S: History & Philosophy of Science, S&S: Ethics

During the past decades, public communication of science has undergone profound changes: from policy‐driven to policy‐informing, from promoting science to interpreting science, and from dissemination to interaction (Burgess, 2014). These shifts in communication paradigms have an impact on what is expected from scientists who engage in public communication: they should be seen as fellow citizens rather than as experts whose task is to increase the scientific literacy of the lay public. Many scientists engage in science communication because they see this as their responsibility toward society (Loroño‐Leturiondo & Davies, 2018). Yet, a significant proportion of researchers still “view public engagement as an activity of talking to rather than with the public” (Hamlyn et al, 2015). The highly criticized “deficit model”, which sees the role of experts as educating the public to mitigate skepticism, still persists (Simis et al, 2016; Suldovsky, 2016). Indeed, a survey we conducted among experts in training seems to corroborate the persistence of the deficit model even among younger scientists. Based on these results and our own experience with organizing public dialogues about human germline gene editing (Box 1), we discuss the implications of this outdated science communication model and an alternative model of public engagement that aims to align science with the needs and values of the public.

Box 1

The DNA‐dialogue project

The Dutch DNA‐dialogue project invited citizens to discuss and form opinions about human germline gene editing. During 2019 and 2020, this project organized twenty‐seven dialogues with professionals, such as embryologists and midwives, and with various lay audiences. Different scenarios of a world in 2039 (https://www.rathenau.nl/en/making‐perfect‐lives/discussing‐modification‐heritable‐dna‐embryos) served as the starting point. Participants expressed their initial reactions to these scenarios with emotion cards and thereby explored the values they themselves and other participants deemed important as they elaborated further. Starting each dialogue in this way provides a context that enables everyone to participate in dialogue about complex topics such as human germline gene editing, and demonstrates that scientific knowledge should not be a prerequisite for participation. An important example of “different” relevant knowledge surfaced during a dialogue with children between 8 and 12 years old in the Sophia Children’s Hospital in Rotterdam (Fig 1). Most adults in the DNA‐dialogues accepted human germline gene modification for severe genetic diseases, as they wished the best possible care and outcome for their children. The children at Sophia, however, stated that they would find it terrible if their parents had altered something about them before they had been born; their parents would not even have known them. Some children went so far as to say they would no longer be themselves without their genetic condition, and that their condition had also given them experiences they would rather not have missed.

Figure 1: Children participating in a DNA‐dialogue meeting. Photographed by Levien Willemse.

11.
The response by the author. Subject Categories: S&S: Economics & Business, S&S: Ethics

I thank Michael Bronstein and Sophia Vinogradov for their interest and comments. I would like to respond to a few of their points. First, I agree with the authors that empirical studies should be conducted to validate any approaches to prevent the spread of misinformation before their implementation. Nonetheless, I think that the ideas I have proposed may be worth further discussion and may inspire empirical studies to test their effectiveness. Second, the authors warn that informing about the imperfections of scientific research may undermine trust in science and scientists, which could result in higher vulnerability to online health misinformation (Roozenbeek et al, 2020; Bronstein & Vinogradov, 2021). I believe that transparency about limitations and problems in research does not necessarily have to diminish trust in science and scientists. On the contrary, as Veit et al put it, “such honesty… is a prerequisite for maintaining a trusting relationship between medical institutions (and practitioners) and the public” (Veit et al, 2021). Importantly, to give an honest picture of scientific research, information about its limitations should be put in adequate context. In particular, the public should also be aware that “good science” is being done by many researchers; that we do have solid evidence of the effectiveness of many medical interventions; and that efforts are being taken to address the problems related to the quality of research. Third, Bronstein and Vinogradov suggest that false and dangerous information should be censored. I agree with the authors that “[c]ensorship can prevent individuals from being exposed to false and potentially dangerous ideas” (Bronstein & Vinogradov, 2021). I also recognize that some information is false beyond any doubt and that its spread may be harmful. What I am concerned about are, among others, the challenges of defining what counts as dangerous and false information and of limiting censorship to only this kind of information. For example, on what sources should decisions to censor be based, and who should make such decisions? Anyone, whether an individual or an organization, with a responsibility to censor information will likely be prone not only to mistakes but also to abuses of power to foster their own interests. Do the benefits we want to achieve by censorship outweigh the potential risks? Fourth, we need rigorous empirical studies examining the actual impact of medical misinformation. What exactly are the harms we are trying to protect against, and what is their scale? This information is necessary to choose proportionate and effective measures to reduce the harms. Bronstein and Vinogradov give an example of a harm which may be caused by misinformation—an increase in methanol poisoning in Iran. Yet, as the authors note, misinformation is not the sole factor in this case; there are also cultural and other contexts (Arasteh et al, 2020; Bronstein & Vinogradov, 2021). Importantly, the methods of studies exploring the effects of misinformation should be carefully elaborated, especially when study participants are asked to self‐report. A recent study suggests that some claims about the prevalence of dangerous behaviors, such as drinking bleach, which may have been caused by misinformation, are largely exaggerated due to the presence of problematic respondents in surveys (preprint: Litman et al, 2021). Last but not least, I would like to call attention to the importance of how the veracity of information is determined in empirical studies on misinformation.
For example, in a study by Roozenbeek et al, cited by Bronstein and Vinogradov, the World Health Organization (WHO) was used as a reliable source of information, which raises questions. For instance, Roozenbeek et al (2020) used the statement “the coronavirus was bioengineered in a military lab in Wuhan” as an example of false information, relying on the judgment of the WHO found on its “mythbusters” website (Roozenbeek et al, 2020). Yet, is there solid evidence to claim that this statement is false? At present, at least some scientists declare that we cannot rule out that the virus was genetically manipulated in a laboratory (Relman, 2020; Segreto & Deigin, 2020). Interestingly, the WHO also no longer excludes such a possibility and has launched an investigation into this issue (https://www.who.int/health‐topics/coronavirus/origins‐of‐the‐virus, https://www.who.int/emergencies/diseases/novel‐coronavirus‐2019/media‐resources/science‐in‐5/episode‐21‐‐‐covid‐19‐‐‐origins‐of‐the‐sars‐cov‐2‐virus); the claim that information about a laboratory origin of the virus is false is no longer present on the WHO “mythbusters” website (https://www.who.int/emergencies/diseases/novel‐coronavirus‐2019/advice‐for‐public/myth‐busters). Against this backdrop, some results of the study by Roozenbeek et al (2020) seem misleading. In particular, how study participants rated the reliability of the bioengineered‐virus statement does not reflect their susceptibility to misinformation, as intended by the researchers, but rather how they perceive the reliability of uncertain information. I hope that discussion and research on these and related issues will continue.

12.
Lazy hazy days     
Scientists have warned about the looming climate crisis for decades, but the world has been slow to act. Are we in danger of making a similar mistake by neglecting the dangers of other climatic catastrophes? Subject Categories: Biotechnology & Synthetic Biology, Economics, Law & Politics, Evolution & Ecology

On one of my trips to Antarctica, I was enjoined to refer not to “global warming” or even to “climate change.” The former implies a uniform and rather benign process, while the latter suggests just a transition from one state to another and seems to minimize all the attendant risks to survival. Neither of these terms adequately or accurately describes what is happening to our planet's climate system as a result of greenhouse gas emissions, not to mention the effects of urbanization, intensive agriculture, deforestation, and other consequences of human population growth. Instead, I was encouraged to use the term “climate disruption,” which embraces the multiplicity of events taking place, some of them still hard to model, that are altering the planetary ecosystem in dramatic ways. With climate disruption now an urgent and undeniable reality, policymakers are finally waking up to the threats that scientists have been warning about for decades. They have accepted the need for action (UNFCCC Conference of the Parties, 2021), even if the commitment remains patchy or lukewarm. But implementing all the necessary changes is a massive undertaking, and it is debatable whether we have enough time left. The fault lies mostly with those who resisted change for so long, hoping the problem would just go away, or denying that it was happening at all. The crisis we face today exists because the changes needed simply cannot be executed overnight. It will take time for the infrastructure to be put in place, whether for renewable electricity, for the switch to carbon‐neutral fuels, for sustainable agriculture and construction, or for net carbon capture. If the problems worsen, requiring even more drastic action, at least we have a direction of travel, though we would be starting off from an even more precarious situation. However, given the time that it has taken—and will still take—to turn around the juggernaut of our industrial society, are we in danger of making the same mistakes all over again by ignoring the risks of the very opposite process happening in our lifetime? The causes of historic climate cooling are still debated, and though we have fairly convincing evidence regarding specific, sudden events, there is no firm consensus on what is behind longer‐term and possibly cyclical changes in the climate. The two best‐documented examples are the catastrophe of 536–540 AD and the effects of the Laki Haze of 1783–1784. The cause of the 536–540 event is still debated, but it is widely believed to have been one or more massive volcanic eruptions that created a global atmospheric dust‐cloud, resulting in a temperature drop of up to 2°C with concomitant famines and societal crises (Toohey et al, 2016; Helama et al, 2018). The Laki Haze was caused by the massive outpouring of sulfurous fumes from the Laki eruption in Iceland. Its effects on the climate, though just as immediate, were less straightforward. The emissions, combined with other meteorological anomalies, disrupted the jetstream and produced other localized effects. In northwest Europe, the first half of the summer of 1783 was exceptionally hot, but the following winters were dramatically cold, and the mean temperature across much of the northern hemisphere is estimated to have dropped by around 1.3°C for 2–3 years (Thordarson & Self, 2003).
In Iceland itself, as well as in much of western and northern Europe, the effects were even more devastating, with widespread crop failures and deaths of both livestock and humans exacerbated by the toxicity of the volcanic gases (Schmidt et al, 2011). Other volcanic events in recorded time have produced major climatic disturbances, such as the 1816 Tambora eruption in Indonesia, which resulted in “the year without a summer,” marked by temperature anomalies of up to 4°C (Fasullo et al, 2017) and again precipitating worldwide famine. The 1883 Krakatoa eruption produced similar disruption, albeit of a lesser magnitude, though its effects are proposed to have been much longer lasting (Gleckler et al, 2006). Much more scientifically challenging is the so‐called Little Ice Age, lasting approximately from 1250 to 1700 AD, when global temperatures were significantly lower than in the preceding and following centuries. It was marked by particularly frigid and prolonged winters in the northern hemisphere. There is no strong consensus as to its cause(s) or even its exact dates, nor that it can be considered a global‐scale event rather than a summation of several localized phenomena. A volcanic eruption in 1257 with effects similar to those of 1816 has been suggested as an initiating event. Disruption of the oceanic circulation system resulting from prolonged anomalies in solar activity is another possible explanation (Lapointe & Bradley, 2021). Nevertheless, and despite an average global cooling of less than 1°C, the effects on global agriculture, settlement, migration and trade, on pandemics such as the Black Death, and perhaps even on wars and revolutions, were profound. Once or twice in the past century, we have faced devastating wars, tsunamis and pandemics that seemed to come out of the blue and exacted massive tolls on humanity. From the most recent of each of these, there is a growing realization that, although such events are rare and poorly predictable, we can greatly limit the damage if we prepare properly. By devoting a small proportion of our resources over time, we can build the infrastructure and the mechanisms to cope when these disasters do eventually strike. Without abandoning any of the emergency measures to combat anthropogenic warming, I believe that the risk of climate cooling needs to be addressed in the same way. The infrastructure for burning fossil fuels needs to be mothballed, not destroyed. Carbon capture needs to be implemented in a way that is rapidly reversible, should this ever be needed. Alternative transportation routes need to be planned and built in case existing ones become impassable due to ice or flooding. Properly insulated buildings are not just a way of saving energy: they are essential for survival in extreme cold, as those of us who live in the Arctic countries are well aware—but many other regions also experience severe winters, for which we should all prepare. Biotechnology needs to be set to work to devise ways of mitigating the effects of sudden climatic events such as the Laki Haze or the Tambora and Krakatoa eruptions, as well as longer‐term phenomena like the Little Ice Age. Could bacteria be used, for example, to detoxify and dissipate a sulfuric aerosol such as the one generated by the Laki eruption? Methane is generally regarded as a major contributor to the greenhouse effect, but it is short‐lived in the atmosphere.
So, could methanogens somehow be harnessed to bring about a temporary rise in global temperatures to offset the short‐term cooling effects of a volcanic dust‐cloud? We already have a global seed bank in Svalbard (Asdal & Guarino, 2018); it might easily be expanded to include a greater representation of cold‐resistant varieties of the world's crop plants that might one day be vital to human survival. And the experience of the Laki Haze indicates a need for varieties capable of withstanding acid rain and other volcanic pollutants, as well as drought and water saturation. An equivalent (embryo) bank for strains of agriculturally important animals potentially threatened by the effects of abrupt cooling of the climate or catastrophic toxification of the atmosphere is also worth considering. It has generally been thought impractical and pointless to prepare for even rarer events, such as cometary impacts, but events that have occurred repeatedly in recorded history and over an even longer time scale (Helama et al, 2021) are likely to happen again. We should and can be better prepared. This is not to say that we should pay attention to every conspiracy theorist or crank, or to paid advocates for energy corporations that seek short‐term profits at the expense of long‐term survival, but the dangers of climate disruption of all kinds are too great to ignore. Instead of our current rather one‐dimensional thinking, we need an “all‐risks” approach to the subject: learning from the past and the present to prepare for the future.

13.

In “Structural basis of transport and inhibition of the Plasmodium falciparum transporter PfFNT” by Lyu et al (2021), the authors depict the inhibitor MMV007839 in its hemiketal form in Fig 3A and F, Fig 4C, and Appendix Figs S10A, B and S13. We note that Golldack et al (2017) reported that the linear vinylogous acid tautomer of MMV007839 constitutes the binding and inhibitory entity of PfFNT. The authors are currently obtaining higher resolution cryo‐EM structural data of MMV007839‐bound PfFNT to ascertain which of the interconvertible tautomers is bound, and the paper will be updated accordingly.

14.
Open Science calls for transparent science and involvement of various stakeholders. Here are examples of and advice for meaningful stakeholder engagement. Subject Categories: Economics, Law & Politics, History & Philosophy of Science

The concepts of Open Science and Responsible Research and Innovation call for a more transparent and collaborative science, and for more participation of citizens. The way to achieve this is through cooperation with different actors or “stakeholders”: individuals or organizations who can contribute to, or benefit from, research, regardless of whether they are researchers themselves or not. Examples include funding agencies, citizens’ associations, patients, and policy makers (https://aquas.gencat.cat/web/.content/minisite/aquas/publicacions/2018/how_measure_engagement_research_saris1_aquas2018.pdf). Such cooperation is even more relevant in the current, challenging times—even apart from a global pandemic—when pseudo‐science, fake news, nihilist attitudes, and ideologies too often threaten the social and technological progress enabled by science. Stakeholder engagement in research can inform and empower citizens, help render research more socially acceptable, and enable policies grounded in evidence‐based knowledge. Beyond that, stakeholder engagement is also beneficial to researchers and to research itself. In a recent survey, the majority of scientists reported benefits from public engagement (Burns et al, 2021). These can include increased mutual trust and mutual learning, improved social relevance of research, and improved adoption of results and knowledge (Cottrell et al, 2014). Finally, stakeholder engagement is often regarded as an important factor in sustaining public investment in the life sciences (Burns et al, 2021).
Here, we discuss different levels of stakeholder engagement by way of example, presenting various activities organized by European research institutions. Based on these experiences, we propose ten reflection points that we believe should be considered by the institutions, the scientists, and the funding agencies to achieve meaningful and impactful stakeholder engagement.

15.
16.
17.
18.
19.

Correction to: The EMBO Journal (2021) 40: e107786. DOI 10.15252/embj.2021107786 | Published online 8 June 2021. The authors would like to add three references to the paper: Starr et al and Zahradník et al also reported that the Q498H or Q498R mutation has enhanced binding affinity to ACE2, and Liu et al reported on the binding of a bat coronavirus to ACE2. Starr et al and Zahradník et al have now been cited in the Discussion section, and the following sentence has been corrected from: “According to our data, the SARS‐CoV‐2 RBD with Q498H increases the binding strength to hACE2 by 5‐fold, suggesting the Q498H mutant is more ready to interact with human receptor than the wildtype and highlighting the necessity for more strict control of virus and virus‐infected animals” to: “Here, according to our data and two recently published papers, the SARS‐CoV‐2 RBD with Q498H or Q498R increases the binding strength to hACE2 (Starr et al, 2020; Zahradník et al, 2021), suggesting the mutant with Q498H or Q498R is more ready to interact with human receptor than the wild type and highlighting the necessity for more strict control of virus and virus‐infected animals”. The Liu et al citation has been added to the following sentence: “In another paper published by our group recently, RaTG13 RBD was found to bind to hACE2 with much lower binding affinity than SARS‐CoV‐2 though RaTG13 displays the highest whole‐genome sequence identity (96.2%) with the SARS‐CoV‐2 (Liu et al, 2021)”. Additionally, the authors have added the GISAID accession IDs to the sequence names of the SARS‐CoV‐2 in two human samples (Discussion section). To make identification unambiguous, the sequence names have been updated from “SA‐lsf‐27 and SA‐lsf‐37” to “GISAID accession ID: EPI_ISL_672581 and EPI_ISL_672589”. Lastly, the authors declare in the Materials and Methods section that all experiments employed SARS‐CoV‐2 pseudovirus in cultured cells. These experiments were performed in a BSL‐2‐level laboratory and were approved by the Science and Technology Conditions Platform Office, Institute of Microbiology, Chinese Academy of Sciences. These changes are herewith incorporated into the paper.

20.
The COVID‐19 pandemic has triggered a new bout of anti‐vaccination propaganda, often grounded in pseudoscience and misinterpretation of evolutionary biology. Subject Categories: Economics, Law & Politics, Microbiology, Virology & Host Pathogen Interaction, Science Policy & Publishing

Towards the end of the summer of 2021, there seemed to be cause for cautious optimism about putting the pandemic behind us. It was clear that the route of viral transmission was airborne and not via surfaces (Goldman, 2021a), which means that masks are very efficient at reducing the spread of SARS‐CoV‐2. The number of cases in the United States and Europe was declining, and the first vaccines became available, with many people lining up to get their jabs. But not all. A significant portion of the population has been refusing to get vaccinated, some fooled or encouraged by pseudoscientific misinformation propagated on the Internet.

