Similar articles
Cockell CS (2011) EMBO Reports 12(3): 181
Our ability to disrupt habitats and manipulate living organisms requires a discussion of the ethics of microbiology, even if we argue that microbes themselves have no rights.

Synthetic biology and the increasing complexity of molecular biology have brought us to the stage at which we can synthesize new microorganisms. This has generated pressing questions about whether these new organisms have any place in our system of ethics and how we should treat them.

The idea that microbes might have some moral claim on us beyond their practical uses or instrumental value is not new. The microbiologist Bernard Dixon (1976) presciently asked whether it was ethical to take the smallpox virus to extinction, at the height of the World Health Organization's eradication campaign in the 1970s. There is no unambiguous answer. Today we might still ask this question, but we might also extend it: is the destruction or extinction of a synthetic microbe made by humans equally questionable, or is such an entity, in that it is designed, more like a machine, which we have no compunction in terminating? Would two lethal pathogens, one synthetic and one natural but otherwise identical, command the same moral claims?

Colloquially, we might ask whether microbes have rights. In previous papers I have discussed the 'rights' of microbes (Cockell, 2004) and further explored some issues about the ethics we apply to them (Cockell, 2008). Julian Davies, in a recent opinion article in EMBO Reports (Davies, 2010), described my assertion that they should have constitutional rights as 'ridiculous'. Although I did suggest that environmental law could be changed to recognize the protection of microbial ecosystems, which would imply statutory rights or protection, nowhere have I claimed that microbes should have 'constitutional' rights.
Nevertheless, this misattribution usefully demonstrates the confusion that exists about exactly how we should treat microbes.

Few people are in any doubt that microbes should be conserved for their direct uses to humans, for example in food and drug production, and for their indirect uses, such as the crucial role they have in the health of ecosystems. Indeed, these motivations can be used to prioritize microbial conservation and protection efforts (Cockell & Jones, 2009). The crucial question is whether microbes have 'intrinsic value' beyond their practical uses. If the answer is 'no', then we should feel no guilt about deliberately driving microbes to extinction for our benefit. However, some people feel uneasy with this conclusion, and that unease calls forth more complex ethical questions.

The question is whether microbes have some sort of 'interests' that make demands on our treatment of them beyond a mere utilitarian calculation. Such arguments turn on what we define as 'interests' and whether interests make demands on us. A microbe has no future plans or thought processes, the sorts of interests accepted as being of sufficient scope to place demands on our treatment of other human beings. However, microbes do have biological interests: a halophilic microbe might eventually die if it is dropped into freshwater. Does our knowledge of what is in the biological interests of a microbe mean that we must show it any consideration beyond its practical uses? The answer is not obviously negative (Taylor, 1981), but even if we decide that it is, this does not let us off the hook quite yet.

There are other intrinsic-value arguments that are more obscure, particularly those around the notion of 'respect': the idea that we should show empathy towards the trajectory, however deterministic, of other life forms.
These unquantifiable and controversial arguments might nevertheless partly explain the unease we would feel watching a group of people smash up and destroy some exquisite microbial mats just because they were bored.

Clearly, human instrumental needs do trump microbes at some level. If they did not, we could not use bleach in our houses, an absurd end-point raised in a 1970s science-fiction story that explored the futuristic ramifications of full microbial rights, in which household bleaches and deodorants are banned (Patrouch, 1977).

However, we should not be so quick to ridicule ideas about microbial ethics and rights. Although it might be true that phages kill a large percentage of the world's bacterial population every few days, as Julian Davies points out, human society has achieved an unprecedented capacity for destruction and creation. Our ability to poison and disrupt habitats remains unquantified with respect to the loss of microbial species. Both synthetic biology and bioterrorism raise the spectre of creating new organisms, including pathogens, that we might need to control or deliberately pursue to extinction. Dixon's dilemma about the smallpox virus, raised more than 30 years ago, has become an urgent point of discussion in the ethics of molecular biology and microbiology.

The authors of "The anglerfish deception" respond to the criticism of their article.

EMBO Reports (2012) advance online publication; doi: 10.1038/embor.2012.70
EMBO Reports (2012) 13(2): 100–105; doi: 10.1038/embor.2011.254

Our respondents, eight current or former members of the EFSA GMO panel, focus on defending the EFSA's environmental risk assessment (ERA) procedures. In our article for EMBO Reports, we actually focused on the proposed EU GMO legislative reform, especially the European Commission (EC) proposal's false political inflation of science, which denies the normative commitments inevitable in risk assessment (RA). Unfortunately, the respondents do not address this problem. Indeed, by insisting that Member States enjoy freedom over risk management (RM) decisions despite the EFSA's central control over RA, they entirely miss the relevant point: the unacknowledged policy, namely normative commitments being made before and during, not only after, scientific ERA. They therefore only highlight, and extend, the problem we identified.

The respondents complain that we misunderstood the distinction between RA and RM. We did not. We challenged it as misconceived and fundamentally misleading, as though only objective science defined RA, with normative choices cleanly confined to RM. Our point was that (i) the processes of scientific RA are inevitably shaped by normative commitments, which (ii) as a matter of institutional, policy and scientific integrity must be acknowledged and inclusively deliberated. They seem unaware that many authorities [1,2,3,4] have recognized such normative choices as prior matters of RA policy, which should be established in a broadly deliberative manner "in advance of risk assessment to ensure that [RA] is systematic, complete, unbiased and transparent" [1].
This was neither recognized nor permitted in the proposed EC reform, a central point that our respondents fail to recognize.

In dismissing our criticism that comparative safety assessment appears as a 'first step' in the new EFSA ERA guidelines (which we correctly referred to in our text but incorrectly referenced in the bibliography [5]), our respondents again ignore this widely accepted 'framing' or 'problem formulation' point for science. The choice of comparator has normative implications, as it immediately commits to a definition of what is normal and, implicitly, acceptable. The specific form and purpose of the comparison(s) is therefore part of the validity question. Their claim that we are against comparison as a scientific step is incorrect; of course comparison is necessary. It simply acts as a shield behind which to avoid our and others' [6] challenge to their self-appointed discretion to define, or worse, to allow applicants to define, what counts in the comparative frame.

Denying these realities and their difficult but inevitable implications, our respondents instead try to justify their own particular choices as 'science'. First, they deny the first-step status of comparative safety assessment, despite its clear appearance in their own ERA Guidance Document [5], both in the representational figure (p. 11) and in the text: "the outcome of the comparative safety assessment allows the determination of those 'identified' characteristics that need to be assessed [...] and will further structure the ERA" (p. 13). Second, despite their claims to the contrary, 'comparative safety assessment', effectively a resurrection of substantial equivalence, is a concept taken from consumer-health RA, controversially applied to the more open-ended processes of ERA, and one that has long been discredited when used as a bottleneck or endpoint for rigorous RA processes [7,8,9,10].
The key point is that normative commitments are being embodied, yet not acknowledged, in RA science. This occurs through a range of similarly unaccountable RA steps introduced into the ERA Guidance, such as judgements of 'biological relevance', 'ecological relevance' or 'familiarity'. We cannot address these here, but our basic point is that endless 'methodological' elaborations of the kind our EFSA colleagues perform only obscure the institutional changes needed to properly address the normative questions for policy-engaged science.

Our respondents deny our claim concerning the singular form of science the EC is attempting to impose on GM policy and debate, by citing formal EFSA procedures for consultations with Member States and non-governmental organizations. However, they directly refute themselves by emphasizing that all Member State GM cultivation bans, permitted only on scientific grounds, have been deemed invalid by the EFSA. They cannot have it both ways. We have addressed the importance of unacknowledged normativity in quality assessments of science for policy in Europe elsewhere [11]. However, it is the 'one door, one key' policy framework for science, deriving from Single Market logic, that forces such singularity. While this might be legitimate policy, it is not scientific; it is political economy.

Our respondents conclude by saying that the paramount concern of the EFSA GMO panel is the quality of its science. We share this concern. However, they avoid our main point: the EC-proposed legislative reform would only exacerbate their problem. Ignoring the normative dimensions of regulatory science and siphoning off scientific debate and its normative issues to a select expert panel, one which, despite claiming independence, faces an EU Ombudsman challenge [12] and the European Parliament's refusal to discharge its 2010 budget because of continuing questions over conflicts of interest [13,14], will not achieve quality science.
What is required are effective institutional mechanisms and cultural norms that identify, and deliberatively address, the otherwise unnoticed normative choices shaping risk science and its interpretive judgements. It is not the EFSA's sole responsibility to achieve this, but it does need to recognize and press the point, against resistance, to develop better EU science and policy.

The differentiation of pluripotent stem cells into various progeny is perplexing. In vivo, nature imposes strict fate constraints; in vitro, PSCs differentiate into almost any phenotype. Might the concept of 'cellular promiscuity' explain these surprising behaviours?

John Gurdon's [1] and Shinya Yamanaka's [2] Nobel Prize involves discoveries that vex fundamental concepts about the stability of cellular identity [3,4], about ageing as a rectified path, and about the differences between germ cells and somatic cells. The differentiation of pluripotent stem cells (PSCs) into progeny, including spermatids [5] and oocytes [6], is perplexing. In vivo, nature imposes strict fate constraints. Yet in vitro, reprogrammed PSCs liberated from the government of the body freely differentiate into any phenotype except placenta, violating even the segregation of somatic cells from germ cells. Albeit anthropomorphic, might the concept of 'cellular promiscuity' explain these surprising behaviours?

Fidelity to one's differentiated state is nearly universal in vivo; even cancers retain some allegiance. Appreciating the mechanisms that, in vitro, liberate reprogrammed cells from the numerous constraints governing development in vivo might provide new insights. Like highway guiderails, a range of constraints preclude progeny cells within embryos and organisms from travelling too far from the trajectory set by their ancestors. Restrictions are imposed externally (basement membranes and intercellular adhesions), internally (chromatin, cytoskeleton, endomembranes and mitochondria) and temporally, by ageing.

'Cellular promiscuity' was glimpsed previously during cloning, when somatic cells successfully 'fertilized' enucleated oocytes in amphibians [1] and, later, with 'Dolly' [7]. Embryonic stem cells (ESCs) corroborate this.
The inner cell mass of the blastocyst develops faithfully, but liberation from the trophectoderm generates pluripotent ESCs in vitro, freed from fate and polarity restrictions. These freedom-seeking ESCs still abide by three-dimensional rules, conforming to chimaera body patterning when injected into blastocysts. Yet transplantation elsewhere results in chaotic teratomas or helter-skelter in vitro differentiation; that is, pluripotency.

August Weismann's germ-plasm theory, 130 years ago, recognized that gametes produce somatic cells, never the reverse. Primordial germ cell migrations into fetal gonads, and parent-of-origin imprints, explain how germ cells are sequestered, retaining genomic and epigenomic purity. Left uncontaminated, these future gametes are held in pristine form to parent the next generation. However, the cracks separating germ and somatic lineages in vitro are widening [5,6]. Perhaps germ cells are restrained within gonads not for their purity but to prevent wild, uncontrolled misbehaviours resulting in germ cell tumours.

The concept of 'cellular promiscuity' in PSCs in vitro might explain why cells of nearly any desired lineage can be detected using monospecific markers. Are assays so sensitive that rare cells can be detected in heterogeneous cultures? Certainly, population heterogeneity matters for transplantable cells (dopaminergic neurons and islet cells) more than for applications needing few cells (sperm and oocytes). This dilemma of maintaining cellular identity in vitro after reprogramming is significant. If not addressed, the value of unrestrained induced PSCs (iPSCs) as reliable models for 'diseases in a dish', let alone for subsequent therapeutic transplantation, might be diminished. X-chromosome re-inactivation variants in differentiating human PSCs, epigenetic imprint errors and copy-number variations are all indicators of in vitro infidelity.
PSCs, which are held to be undifferentiated cells, are artefacts after all, since in vivo they would undergo their programmed development.

If correct, the hypothesis accounts for concerns raised about the inherent genomic and epigenomic unreliability of iPSCs: they are likely to be unfaithful to their in vivo differentiation trajectories owing both to their freedom from in vivo developmental programmes and to poorly characterized modifications in culture conditions. 'Memory' of the PSC's in vivo identity might need to be improved by using approaches that do not fully erase imprints. Regulatory authorities, including the Food & Drug Administration, require evidence that cultured PSCs retain their original cellular identity. Notwithstanding fidelity lapses at the organismal level, the recognition that our cells have intrinsic freedom-loving tendencies in vitro might generate better approaches that release somatic cells only partly, on probation, rather than into full emancipation.

Duboule D (2010) EMBO Reports 11(7): 489
Where is 'evo-devo' going and how will it get there? Denis Duboule analyses the fields of evolution and development and argues that their current marriage is likely a transitory affair.

In his inspiring book Ontogeny and Phylogeny (1977), the late Stephen J. Gould explained why developmental biology and evolution, two essential domains of the life sciences, had diverged during the course of the twentieth century; both disciplines had to reach independently a platform of mutual understanding, a theoretical framework wherein concepts are understood and accepted by both parties. A step towards this goal was achieved in the 1980s with the discovery that animals share not only similar 'developmental genes', but also more integrated structural and functional aspects of their ontogenies (McGinnis et al, 1984; Akam, 1989). While these advances opened the door to a molecular understanding of development, analyses of gene expression in various species also allowed correlations to be established between genetic activities and evolving forms.

This comparative approach triggered the emergence of novel animal models and generated a portfolio of concepts, which nowadays form the basis of a discipline sometimes referred to as 'evo-devo' (Wallace, 2002; Carroll, 2005; de Robertis, 2008). The frontiers of this field, however, are not clearly defined. Evo-devo research extends from simply 'PCRing' a trendy gene from a weird animal up to the most sophisticated molecular genetic approaches dealing with the evolution of gene function and regulation.
Yet the experiments are always set within the general context of homology, understood using morphological, functional or regulatory criteria indiscriminately.

With our improved knowledge of the mechanisms underlying animal development, we can now address the question of natural variation. We have learnt, for instance, that rather limited sets of genes and signalling pathways are used over and over again; hence the development of most organs or structures relies on comparable rules. This, in turn, implies that developing systems follow highly constrained roadmaps, modifications of which generally lead to pleiotropic effects (Duboule & Wilkins, 1998; Kirschner & Gerhart, 2005). This natural parsimony in the use of genetic tools sometimes makes it difficult to infer conservation of function from the mere conservation of gene expression patterns, for instance between two evolutionarily distant animals, and thus calls for a deeper level of conservation to ascertain such phylogenetic relationships.

This issue can be addressed either by a thorough understanding of the regulations at work, assuming that conservation of regulatory circuits demonstrates a common phylogenetic history, or by a large survey of various species, if we accept that a robust association between gene expression and a particular trait bears an evolutionary meaning. The latter point raises the paradox of model systems: whether general conclusions can be extracted from given biological items that were often chosen for study because of particularly well-adapted features, rather than for their elusive paradigmatic value. In other words, will we ever understand the full set of core principles by working exclusively with adaptive traits that intrinsically tend to distract from these rules?
While this issue is somewhat theoretical, the popular idea that some species can display advantages over others in terms of experimental benefit indicates clearly that such questions have not yet been discussed in sufficient depth.

The lack of a clear definition of what evo-devo covers as a discipline is echoed by the difficulty of elaborating a commonly accepted set of guidelines, owing mostly to the conflicting ménage between developmental geneticists on the one hand, with their mindset inherited from T. Morgan and H. Spemann, and population (evolutionary) geneticists on the other, the direct descendants of the new synthesis. In fact, we face a modern version of the classical dichotomy between variation (the 'how' question) and selection (the 'why' question), and we may wonder how long this productive relationship will last. While it might consolidate itself and lead to an integrated theory of evolution that includes the emerging mechanistic side, it could well split again into divergent trajectories, like a comet that returns close to a planet every hundred years to fill itself with concepts and energy before leaving for yet another journey.

Evo-devo is arguably a transitory discipline. We are witnessing the emergence of a new developmental biology, relying on high-throughput approaches, systems analyses and modelling to use gene (information) clusters, or even full genomes, as we currently use single genes. The accompanying shift in the required competencies (for example, bioinformatics, physics and maths), although of great interest mechanistically speaking, does not necessarily strengthen the link with the genetic framework of evolution. Also, we should remember that evolution and development are disciplines built on different epistemological grounds, which bring to their fusion an unstable equilibrium.
Development is a science of recurrence, based on the assumption that the same process will happen again in each generation, leading to results that we can predict. As such, it has a fixed timeframe. Evolution relies on the exact opposite premises: it is by definition a linear process in which recurrence is impossible. It has no clear timeframe and, so far, no predictable result. The former discipline explains how things happen; the latter, how things most likely happened.

This theoretical antagonism might nevertheless become obsolete once the mechanisms of development are fully understood and once the computation of various ontogenetic roadmaps can discriminate the possible from the impossible, telling us which forms could evolve out of a given species. This will primarily concern macro-evolution, as micro-evolutionary phenomena are probably less constrained and, as such, more difficult to anticipate. If this were true, one should be able to predict with some accuracy the few alternative solutions offered to a particular species over the next million years, especially if environmental conditions can also be predicted. In such a scenario, the next rendezvous with the comet will turn evolution into a predictive science. This may indeed take another century.

Greener M (2008) EMBO Reports 9(11): 1067–1069
A consensus definition of life remains elusive.

In July this year, the Phoenix lander, launched by NASA in 2007 as part of the Phoenix mission to Mars, provided the first irrefutable proof that water exists on the Red Planet. "We've seen evidence for this water ice before in observations by the Mars Odyssey orbiter and in disappearing chunks observed by Phoenix [...], but this is the first time Martian water has been touched and tasted," commented lead scientist William Boynton from the University of Arizona, USA (NASA, 2008). The robot's discovery of water in a scooped-up soil sample increases the probability that there is, or was, life on Mars.

Meanwhile, the Darwin project, under development by the European Space Agency (ESA; Paris, France; www.esa.int/science/darwin), envisages a flotilla of four or five free-flying spacecraft to search for the chemical signatures of life in 25 to 50 planetary systems. Yet in the vastness of space, to paraphrase the British astrophysicist Arthur Eddington (1882–1944), life might be not only stranger than we imagine, but stranger than we can imagine. The limits of our current definitions of life raise the possibility that we would not be able to recognize an extraterrestrial organism.

Back on Earth, molecular biologists, whether deliberately or not, are empirically tackling the question of what life is. Researchers at the J. Craig Venter Institute (Rockville, MD, USA), for example, have synthesized an artificial bacterial genome (Gibson et al, 2008). Others have worked on 'minimal cells', with the aim of synthesizing a 'bioreactor' that contains the minimum set of components necessary to be self-sustaining, reproduce and evolve; some biologists regard these features as the hallmarks of life (Luisi, 2007). However, deciding who is first in the 'race to create life' requires a consensus definition of life itself.
"A definition of the precise boundary between complex chemistry and life will be critical in deciding which group has succeeded in what might be regarded by the public as the world's first theology practical," commented Jamie Davies, Professor of Experimental Anatomy at the University of Edinburgh, UK.

For most biologists, defining life is a fascinating, fundamental, but largely academic question. It is, however, crucial for exobiologists looking for extraterrestrial life on Mars, on Jupiter's moon Europa, on Saturn's moon Titan and on planets outside our solar system. In their search for life, exobiologists base their working hypothesis on the only example to hand: life on Earth. "At the moment, we can only assume that life elsewhere is based on the same principles as on Earth," said Malcolm Fridlund, Secretary for the Exo-Planet Roadmap Advisory Team at the ESA's European Space Research and Technology Centre (Noordwijk, The Netherlands). "We should, however, always remember that the universe is a peculiar place and try to interpret unexpected results in terms of new physics and chemistry."

The ESA's Darwin mission will therefore search for life-related gases, such as carbon dioxide, water, methane and ozone, in the atmospheres of other planets. On Earth, the emergence of life altered the balance of atmospheric gases: living organisms produced all of the Earth's oxygen, which now accounts for one-fifth of the atmosphere. "If all life on Earth was extinguished, the oxygen in our atmosphere would disappear in less than 4 million years, which is a very short time as planets go—the Earth is 4.5 billion years old," Fridlund said.
He added that organisms present in the early phases of life on Earth produced methane, which altered the atmospheric composition compared with a planet devoid of life.

Although the Darwin project will use a pragmatic and specific definition of life, biologists, philosophers and science-fiction authors have devised numerous other definitions, none of which is entirely satisfactory. Some are based on basic physiological characteristics: a living organism must feed, grow, metabolize, respond to stimuli and reproduce. Others invoke metabolic definitions, under which a living organism has a distinct boundary, such as a membrane, that facilitates interaction with the environment and transfers the raw materials needed to maintain its structure (Wharton, 2002). The minimal cell project, for example, defines cellular life as "the capability to display a concert of three main properties: self-maintenance (metabolism), reproduction and evolution. When these three properties are simultaneously present, we will have a full fledged cellular life" (Luisi, 2007). These concepts regard life as an emergent phenomenon arising from the interaction of non-living chemical components.

Cryptobiosis (hidden life, also known as anabiosis) and bacterial endospores challenge the physiological and metabolic elements of these definitions (Wharton, 2002). When the environment changes, certain organisms are able to enter cryptobiosis, a state in which their metabolic activity either ceases reversibly or is barely discernible. Cryptobiosis allows the larvae of the African fly Polypedilum vanderplanki to survive desiccation for up to 17 years and temperatures ranging from −270 °C (liquid helium) to 106 °C (Watanabe et al, 2002). It also allows the cysts of the brine shrimp Artemia to survive desiccation, ultraviolet radiation, extremes of temperature (Wharton, 2002) and even toyshops, which sell the cysts as 'sea monkeys'.
Organisms in a cryptobiotic state show characteristics that differ markedly from what we normally consider to be life, although they are certainly not dead. "[C]ryptobiosis is a unique state of biological organization," commented James Clegg, from the Bodega Marine Laboratory at the University of California (Davis, CA, USA), in a 2001 article (Clegg, 2001). Bacterial endospores, the "hardiest known form of life on Earth" (Nicholson et al, 2000), are able to withstand almost any environment, perhaps even interplanetary space. Microbiologists have isolated endospores of strict thermophiles from cold lake sediments and revived spores from samples some 100,000 years old (Nicholson et al, 2000).

Another problem with definitions of life is that they can expand beyond biology. The minimal cell project, in common with most modern definitions of life, encompasses the ability to undergo Darwinian evolution (Wharton, 2002). "To be considered alive, the organism needs to be able to undergo extensive genetic modification through natural selection," said Professor Paul Freemont of Imperial College London, UK, whose research interests encompass synthetic biology. But the virtual 'organisms' in computer simulations such as the Game of Life (www.bitstorm.org/gameoflife) and Tierra (http://life.ou.edu/tierra) also exhibit life-like characteristics, including growth, death and evolution, as do robots and other artificial systems that attempt to mimic life (Guruprasad & Sekar, 2006).
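As a concrete illustration (not part of the original article), the Game of Life cited above shows how 'growth' and 'death' can emerge from purely local, mechanical rules; a minimal sketch:

```python
# Minimal sketch of Conway's Game of Life, the cellular automaton cited
# above as exhibiting life-like behaviour. Cells are 'born' next to
# exactly three live neighbours, 'survive' with two or three, and
# otherwise 'die' of isolation or overcrowding.
from collections import Counter

def step(live):
    """Advance one generation; `live` is a set of (x, y) live cells."""
    # Count live neighbours for every cell adjacent to a live cell
    neighbours = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 neighbours; survival on 2 or 3
    return {cell for cell, n in neighbours.items()
            if n == 3 or (n == 2 and cell in live)}

# A 'blinker' oscillates between horizontal and vertical with period 2
blinker = {(0, 1), (1, 1), (2, 1)}
```

Iterating `step` on richer seed patterns produces growth, stasis or extinction; systems such as Tierra go further by letting self-replicating programs mutate and compete, which is why such simulations strain purely behavioural definitions of life.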
"At the moment, we have some problems differentiating these approaches from something biologists consider [to be] alive," Fridlund commented.

Both the genetic code and all computer-programming languages are means of communicating large quantities of codified information, which adds another element to a comprehensive definition of life. Guenther Witzany, an Austrian philosopher, has developed a "theory of communicative nature" that, he claims, differentiates the biotic from the abiotic. "Life is distinguished from non-living matter by language and communication," Witzany said. According to his theory, RNA and DNA use a 'molecular syntax' to make sense of the genetic code in a manner similar to language. This paragraph, for example, could contain the same words in a random order; it would be meaningless without syntactic and semantic rules. "The RNA/DNA language follows syntactic, semantic and pragmatic rules which are absent in [a] random-like mixture of nucleic acids," Witzany explained.

Yet successful communication requires both a speaker using the rules and a listener who is aware of, and can understand, the syntax and semantics. Cells, tissues, organs and organisms communicate with each other to coordinate and organize their activities; in other words, they exchange signals that contain meaning. Noradrenaline binding to a β-adrenergic receptor in the bronchi, for example, communicates a signal that says 'dilate'. "If communication processes are deformed, destroyed or otherwise incorrectly mediated, both coordination and organisation of cellular life is damaged or disturbed, which can lead to disease," Witzany added.
"Cellular life also interprets abiotic environmental circumstances—such as the availability of nutrients, temperature and so on—to generate appropriate behaviour."

Nonetheless, even definitions of life that include all the elements mentioned so far might still be incomplete. "One can make a very complex definition that covers life on the Earth, but what if we find life elsewhere and it is different? My opinion, shared by many, is that we don't have a clue how life arose on Earth, even if there are some hypotheses," Fridlund said. "This underlies many of our problems defining life. Since we do not have a good minimum definition of life, it is hard or impossible to find out how life arose without observing the process. Nevertheless, I'm an optimist who believes the universe is understandable with some hard work, and I think we will understand these issues one day."

Both synthetic biology and research on organisms that live in extreme conditions allow biologists to explore biological boundaries, which might help them to reach a consensual minimum definition of life and to understand how it arose and evolved. Life is certainly able to flourish in some remarkably hostile environments. Thermus aquaticus, for example, is metabolically optimal in the springs of Yellowstone National Park at temperatures between 75 °C and 80 °C. Another extremophile, Deinococcus radiodurans, has evolved a highly efficient biphasic system to repair radiation-induced DNA breaks (Misra et al, 2006) and, as Fridlund noted, "is remarkably resistant to gamma radiation and even lives in the cooling ponds of nuclear reactors."

In turn, synthetic biology allows a detailed examination of the elements that define life, including the minimum set of genes required to create a living organism.
Researchers at the J Craig Venter Institute, for example, have synthesized a 582,970-base-pair Mycoplasma genitalium genome containing all the genes of the wild-type bacterium, except one that they disrupted to block pathogenicity and allow for selection. ‘Watermarks’ at intergenic sites that tolerate transposon insertions identify the synthetic genome, which would otherwise be indistinguishable from the wild type (Gibson et al, 2008).

Yet, as Pier Luigi Luisi from the University of Roma in Italy remarked, even M. genitalium is relatively complex. “The question is whether such complexity is necessary for cellular life, or whether, instead, cellular life could, in principle, also be possible with a much lower number of molecular components,” he said. After all, life probably did not start with cells that already contained thousands of genes (Luisi, 2007).

To investigate further the minimum number of genes required for life, researchers are using minimal cell models: synthetic genomes that can be included in liposomes, which themselves show some life-like characteristics. Certain lipid vesicles are able to grow, divide and grow again, and can include polymerase enzymes to synthesize RNA from external substrates, as well as functional translation apparatuses, including ribosomes (Deamer, 2005).

However, the requirement that an organism be subject to natural selection to be considered alive could prove to be a major hurdle for current attempts to create life. As Freemont commented: “Synthetic biologists could include the components that go into a cell and create an organism [that is] indistinguishable from one that evolved naturally and that can replicate […] We are beginning to get to grips with what makes the cell work.
Including an element that undergoes natural selection is proving more intractable.”

John Dupré, Professor of Philosophy of Science and Director of the Economic and Social Research Council (ESRC) Centre for Genomics in Society at the University of Exeter, UK, commented that synthetic biologists still approach the construction of a minimal organism with certain preconceptions. “All synthetic biology research assumes certain things about life and what it is, and any claims to have ‘confirmed’ certain intuitions—such as life is not a vital principle—aren’t really adding empirical evidence for those intuitions. Anyone with the opposite intuition may simply refuse to admit that the objects in question are living,” he said. “To the extent that synthetic biology is able to draw a clear line between life and non-life, this is only possible in relation to defining concepts brought to the research. For example, synthetic biologists may be able to determine the number of genes required for minimal function. Nevertheless, ‘what counts as life’ is unaffected by minimal genomics.”

Partly because of these preconceptions, Dan Nicholson, a former molecular biologist now working at the ESRC Centre, commented that synthetic biology adds little to the understanding of life already gained from molecular biology and biochemistry. Nevertheless, he said, synthetic biology might allow us to go boldly into the realms of biological possibility where evolution has not gone before.

An engineered synthetic organism could, for example, express novel amino acids, proteins, nucleic acids or vesicular forms. A synthetic organism could use pyranosyl-RNA, which produces a stronger and more selective pairing system than the naturally occurring furanosyl-RNA (Bolli et al, 1997).
Furthermore, the synthesis of proteins that do not exist in nature—so-called never-born proteins—could help scientists to understand why evolutionary pressures only selected certain structures.

As Luisi remarked, the ratio between the number of theoretically possible proteins containing 100 amino acids and the real number present in nature is close to the ratio between the space of the universe and the space of a single hydrogen atom, or the ratio between all the sand in the Sahara Desert and a single grain. Exploring never-born proteins could, therefore, allow synthetic biologists to determine whether particular physical, structural, catalytic, thermodynamic and other properties maximized the evolutionary fitness of natural proteins, or whether the current protein repertoire is predominately the result of chance (Luisi, 2007).

“Synthetic biology also could conceivably help overcome the ‘n = 1 problem’—namely, that we base biological theorising on terrestrial life only,” Nicholson said. “In this way, synthetic biology could contribute to the development of a more general, broader understanding of what life is and how it might be defined.”

No matter the uncertainties, researchers will continue their attempts to create life in the test tube—it is, after all, one of the greatest scientific challenges. Whether or not they succeed will depend partly on the definition of life that they use, though in any case, the research should yield numerous insights that are beneficial to biologists generally. “The process of creating a living system from chemical components will undoubtedly offer many rich insights into biology,” Davies concluded. “However, the definition will, I fear, reflect politics more than biology. Any definition will, therefore, be subject to a lot of inter-lab political pressure.
Definitions are also important for bioethical legislation and, as a result, reflect larger politics more than biology. In the final analysis, as with all science, deep understanding is more important than labelling with words.”
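Luisi's sequence-space comparison above can be checked with simple arithmetic. A minimal sketch, assuming 100-residue chains built from the 20 standard amino acids; the ~10^13 figure for the natural protein repertoire is a hypothetical placeholder, not a number from the article:

```python
import math

# Number of theoretically possible 100-residue proteins, assuming the
# 20 standard amino acids (an assumption, not a figure from the article).
possible = 20 ** 100

# Hypothetical order-of-magnitude guess for the number of distinct
# natural protein sequences; the true repertoire size is unknown.
natural_estimate = 10 ** 13

log10_possible = 100 * math.log10(20)  # ~130 orders of magnitude
log10_ratio = log10_possible - math.log10(natural_estimate)

print(f"possible sequences  ~ 10^{log10_possible:.0f}")
print(f"possible / natural  ~ 10^{log10_ratio:.0f}")
```

Even with a generous estimate of the natural repertoire, the unexplored ‘never-born’ portion of sequence space dwarfs it by more than a hundred orders of magnitude, which is the point of Luisi's analogy.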

10.
Biopedagogy     
The world is changing fast and teachers are struggling to familiarize children and young people with the norms and values of society. Biopedagogy—the biology behind pedagogical approaches—might provide some insights and guidance.

Humans are the only animals that are subject to cumulative cultural evolution. Biological evolution would be too slow to allow for the invention and continuous improvement of complex artefacts, both material and abstract. One of the mechanisms that makes it possible for humans to pass on social, cultural and technological advances over generations is teaching. Some scholars believe that teaching itself might be an exclusively human trait; others argue that teaching is more prevalent in nature, if it is defined as cooperative behaviour that promotes learning, independent of mental states and cognitive intentions [1].

The science, art and profession of teaching is known as ‘pedagogy’ and its biological basis might well be dubbed ‘biopedagogy’. Pedagogy embeds the learner into a particular culture by exposing the developing mind to cultural values and practices. Human teaching represents two disparate but closely linked activities—education and instruction. To ‘educate’ means to unfold the latent potential of the learner and to cultivate human nature by promoting impulses that conform to a culture and inhibiting those that contradict it. ‘Instruction’ is the provision of knowledge and skills.

According to the ethologist Konrad Lorenz, human nature, which is a result of biological evolution, functions as the “inborn schoolmaster” [2] by both allowing and constraining the learning of all the products of cultural evolution. Lorenz received the Nobel Prize for his discovery of ‘imprinting’: the irreversible, life-long fixation of a response to a situation encountered by an organism during development.
Imprinting is not specific to humans, but humans have evolved, along with the formation of the central nervous system, more sophisticated mental organs that we call the social brain, the group mind and the Darwinian soul. As the physical development of an organism proceeds in stages, so does its mental development. Some of these stages represent critical periods that are particularly sensitive to imprinting. The rapid acquisition of the mother tongue, for instance, apparently represents such a specific period in human development.

According to Lorenz, during and shortly after puberty, humans are prone to a specific kind of imprinting from a culture and its abstract norms and values, driven by a need to become members of a reference group striving for a common ideal [2]. We might call this developmental stage of humans the ‘second (or ideational) imprinting’. This imprinting presupposes a stable society with firmly established norms and values and, in turn, it serves to ensure that stability. The British neuroscientist Sarah-Jayne Blakemore [3] corroborated that the human brain undergoes protracted development and demonstrated that adolescence, in particular, represents a period during which the neuronal basis of the social brain reorganizes. This provides opportunities, but also imposes great responsibility on high-school and university teachers.

Jan Amos Comenius, a seventeenth-century Moravian educator, already suggested that the mastery of teaching consists in recognizing stages of mental development in which a student is prepared and eager to learn stage-specific knowledge spontaneously. In his view, a teacher is more similar to a gardener, who gives plants care and nutrients to allow them to develop, grow and flourish. Comenius also anticipated the crucial role of positive emotions in pedagogy.
His commandment Schola ludus—The School of Play—expresses the fact that teaching and learning can, and should be, associated with pleasure and joy for both teachers and students.

Human nature evolved during the Pleistocene, about 1.8 million to 10,000 years ago, to cope with a hunter–gatherer lifestyle. Yet, modern humans live in and adapt to vastly different environments created by cultural evolution. Apparently, the human genetic endowment is highly versatile and encompasses abstract ‘cultural loci’. Such a cultural locus is an ‘empty slot’ that functions only when it is filled with a meme from the cultural environment; memes would be akin to ‘alleles’ that are specific to a particular cultural locus. This cross-talk of biology and culture makes humans symbolic animals—our social brain allows us to behave altruistically towards ‘symbolic kin’ with whom we share no genetic relationship; and our group mind embraces not only our relatives and friends, but also our tribe or nation and possibly humanity as a whole. Education, and in particular imprinting, essentially determines the extent, quality and scope of this deployment.

A developing child has to pass through all the crucial periods of learning successfully to become a mature human—and humane—being. As Lorenz noted, once a sensitive period has elapsed and the opportunity to learn has been missed, the ability to catch up is considerably reduced or irreversibly lost. We live in a time when cultural values and norms are rapidly changing, often within less than a generation. The ideational imprinting of developing adolescents becomes a problem when the traditional role of family and school is displaced by new social forces such as the internet, Facebook, Twitter and the blogosphere.
How to prevent young people from developing into persons with stunted social brains, with narrow group minds attuned to fleeting reference groups and with fragmented Darwinian souls, and how to promote the development of strong personalities, is a challenge for education in the twenty-first century.

11.
Paul van Helden 《EMBO reports》2012,13(11):942-942
We tend to think in black-and-white terms of good versus bad alleles and their meaning for disease. However, in doing so, we ignore the potential importance of heterozygous alleles.

The structure and function of any protein is determined by its amino acid sequence. Thus, the substitution of one amino acid for another can alter the activity of a protein or its function. Mutations—or rather, polymorphisms, once they become fixed in the population—can be deleterious, such that the altered protein is no longer able to fulfil its role, with potentially devastating effects on the cell. Rarely, they can improve protein function and cell performance. In either case, any changes in the amino acid sequence, whether they affect only one amino acid or larger parts of the protein, are encoded by polymorphisms in the nucleotide sequence of that protein’s gene. For any given polymorphism, diploid organisms with two sets of chromosomes can therefore exist in either a heterozygous state or one of two homozygous states. When the polymorphism is rare, most individuals are homozygous for the ‘wild-type’ state, some individuals are heterozygous and a few are homozygous for the rare polymorphic variant. Conversely, if the polymorphism occurs in 50% of the alleles, the heterozygous state is common.

At first glance, the deleterious homozygous state seems to be something that organisms try to avoid: close relatives usually do not breed, probably to prevent the homozygous accumulation of deleterious alleles. Thus, human cultural norms, founded in our biology, actively select for heterozygosity, as many civilizations and societies regard incest as a social taboo. The fields of animal husbandry and conservation biology are littered with information about the significant positive correlation between genetic diversity, evolutionary advantage and fitness [1].
In sexually reproducing organisms, heterozygosity is generally regarded as ‘better’ in terms of adaptability and evolutionary advantage.

Why then do we seldom, if ever, regard allelic heterozygosity as an advantage when it comes to genes linked with health and disease? Perhaps it is because we tend to distinguish between the ‘good’ allele, the ‘bad’ allele and the ‘ugly’ heterozygote—since it is burdened with one ‘bad’ allele. Maybe this attitude is a remnant of the outdated ‘one gene, one disease’ model, or of the early studies on inheritable diseases that focused on monogenic or autosomal-dominant genetic disorders. Even modern genetics almost always assigns ‘risk’ to an allele that is associated with a health condition or disadvantaged phenotype; clearly, then, one homozygous state must have an advantage—sometimes referred to as wild type—but the heterozygote is often ignored altogether.

Maybe we also shun heterozygosity because it is hard to prove, beyond a few examples, that it might offer an advantage. A 2010 paper published in Cell claimed that heterozygosity at the lta4h locus conveys protection against tuberculosis [2]. There is a mechanistic basis for the claim: lta4h encodes leukotriene A4 hydrolase, which catalyses the final step in the synthesis of leukotriene B4, an efficient pro-inflammatory eicosanoid. However, an extensive case–control study could not confirm the association between heterozygosity and protection against tuberculosis [3]. Therefore, many in the field dismiss the prior claim to protection conferred by the heterozygous state.

Yet, we know that most biochemical and physiological processes are highly complex systems that involve multiple, interlinked steps with extensive control and feedback mechanisms. Heterozygosity might be one strategy by which an organism maintains flexibility, as it provides more than one allele to fall back on, should conditions change.
We may therefore hypothesize that heterozygosity can be either a risk or an advantage, depending on the penetrance or dominance of the alleles. Indeed, there are a few cases in which heterozygosity confers some advantage. For example, individuals who are homozygous for the CCR5 deletion polymorphism (Δ32/Δ32) are protected against HIV-1 infection, whereas CCR5/Δ32 heterozygotes show slower progression to acquired immunodeficiency syndrome (AIDS). In sickle-cell anaemia, heterozygotes have a protective advantage against malaria, whereas the homozygotes either lack protection or suffer health consequences. Thus, although heterozygosity might not create a general fitness advantage, it is advantageous under certain specific conditions, namely the presence of the malaria parasite.

In most aspects of life, there are few absolutes and many shades of grey. The ‘normal’ range of parameters in medicine is a clear example of this: optimal functioning of the relevant physiological processes depends on levels that are ‘just right’. As molecular and genetic research tackles the causes and risk factors of complex diseases, we may perhaps find more examples of how heterozygosity at the genetic level conveys health advantages in humans. As the above example regarding tuberculosis indicates, it is difficult to demonstrate any advantage of the heterozygous state. We simply need to be receptive to such possibilities, and improve and reconcile our understanding of allelic diversity and heterozygosity. Researchers working on human disease could benefit from the insights of evolutionary biologists and breeders, who are more appreciative of the heterozygous state.
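The genotype arithmetic behind the claims above (a rare polymorphism leaves most individuals homozygous wild type; an allele present in 50% of alleles makes heterozygotes the largest class) follows from the Hardy–Weinberg proportions p², 2pq and q². A minimal sketch, assuming random mating and no selection; the function name is illustrative, not from the article:

```python
def genotype_frequencies(q: float) -> tuple[float, float, float]:
    """Return (hom_wild_type, heterozygote, hom_variant) frequencies for a
    biallelic locus with variant-allele frequency q, under Hardy-Weinberg
    assumptions (random mating, no selection, no drift)."""
    p = 1.0 - q
    return p * p, 2.0 * p * q, q * q

# Rare polymorphism (q = 1%): almost everyone is homozygous wild type,
# roughly (0.98, 0.02, 0.0001).
print(genotype_frequencies(0.01))

# Allele present in 50% of alleles: heterozygotes are the commonest
# class, (0.25, 0.5, 0.25).
print(genotype_frequencies(0.5))
```

The same function makes the sickle-cell case concrete: protection via heterozygote advantage can maintain a variant allele at an appreciable q even when the variant homozygote is strongly disadvantaged.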

12.
Lessons from science studies for the ongoing debate about ‘big’ versus ‘little’ research projects

During the past six decades, the importance of scientific research to the developed world and the daily lives of its citizens has led many industrialized countries to rebrand themselves as ‘knowledge-based economies’. The increasing role of science as a main driver of innovation and economic growth has also changed the nature of research itself. Starting with the physical sciences, recent decades have seen academic research increasingly conducted in the form of large, expensive and collaborative ‘big science’ projects that often involve multidisciplinary, multinational teams of scientists, engineers and other experts.

Although laboratory biology was late to join the big science trend, there has nevertheless been a remarkable increase in the number, scope and complexity of research collaborations and projects involving biologists over the past two decades (Parker et al, 2010). The Human Genome Project (HGP) is arguably the most well known of these and attracted serious scientific, public and government attention to ‘big biology’. Initial exchanges were polarized and often polemic, as proponents of the HGP applauded the advent of big biology and argued that it would produce results unattainable through other means (Hood, 1990). Critics highlighted the negative consequences of massive-scale research, including the industrialization, bureaucratization and politicization of research (Rechsteiner, 1990).
They also suggested that it was not suited to generating knowledge at all; Nobel laureate Sydney Brenner joked that sequencing was so boring it should be done by prisoners: “the more heinous the crime, the bigger the chromosome they would have to decipher” (Roberts, 2001).

A recent Opinion in EMBO reports summarized the arguments against “the creeping hegemony” of ‘big science’ over ‘little science’ in biomedical research. First, many large research projects are of questionable scientific and practical value. Second, big science transfers the control of research topics and goals to bureaucrats, when decisions about research should be primarily driven by the scientific community (Petsko, 2009). Gregory Petsko makes a valid point in his Opinion about wasteful research projects and raises the important question of how research goals should be set and by whom. Here, we contextualize Petsko’s arguments by drawing on the history and sociology of science to expound the drawbacks and benefits of big science. We then advance an alternative to the current antipodes of ‘big’ and ‘little’ biology, which offers some of the benefits and avoids some of the adverse consequences.

Big science is not a recent development. Among the first large, collaborative research projects were the Manhattan Project to develop the atomic bomb, and efforts to decipher German codes during the Second World War. The concept itself was put forward in 1961 by the physicist Alvin Weinberg, and further developed by the historian of science Derek de Solla Price in his pioneering book, Little Science, Big Science: “The large-scale character of modern science, new and shining and all powerful, is so apparent that the happy term ‘Big Science’ has been coined to describe it” (De Solla Price, 1963). Weinberg noted that science had become ‘big’ in two ways.
First, through the development of elaborate research instrumentation, the use of which requires large research teams, and second, through the explosive growth of scientific research in general. More recently, big science has come to refer to a diverse but strongly related set of changes in the organization of scientific research. This includes expensive equipment and large research teams, but also the increasing industrialization of research activities, the escalating frequency of interdisciplinary and international collaborations, and the increasing manpower needed to achieve research goals (Galison & Hevly, 1992). Many areas of biological research have shifted in these directions in recent years and have radically altered the methods by which biologists generate scientific knowledge.

Understanding the implications of this change begins with an appreciation of the history of collaborations in the life sciences—biology has long been a collaborative effort. Natural scientists accompanied the great explorers in the grand alliance between science and exploration during the sixteenth and seventeenth centuries (Capshew & Rader, 1992), which not only served to map uncharted territories, but also contributed enormously to knowledge of the fauna and flora discovered. These early expeditions gradually evolved into coordinated, multidisciplinary research programmes, which began with the International Polar Years, intended to concentrate international research efforts at the North and South Poles (1882–1883; 1932–1933).
The Polar Years became exemplars of large-scale life science collaboration, begetting the International Geophysical Year (1957–1958) and the International Biological Programme (1968–1974).

Despite this long history of collaboration, laboratory biology remained ‘small-scale’ until the rising prominence of molecular biology changed the research landscape. During the late 1950s and early 1960s, many research organizations encouraged international collaboration in the life sciences, spurring the creation of, among other things, the European Molecular Biology Organization (1964) and the European Molecular Biology Laboratory (1974). In addition, international mapping and sequencing projects were developed around model organisms such as Drosophila and Caenorhabditis elegans, and scientists formed research networks, exchanged research materials and information, and divided labour across laboratories. These new ways of working set the stage for the HGP, which is widely acknowledged as the cornerstone of the current ‘post-genomics era’. As an editorial on ‘post-genomics cultures’ put it in the journal Nature, “Like it or not, big biology is here to stay” (Anon, 2001).

Just as big science is not new, neither are concerns about its consequences. As early as 1948, the sociologist Max Weber worried that as equipment was becoming more expensive, scientists were losing autonomy and becoming more dependent on external funding (Weber, 1948). Similarly, although Weinberg and De Solla Price expressed wonder at the scope of the changes they were witnessing, they too offered critical evaluations.
For Weinberg, the potentially negative consequences associated with big science were “administratitis, moneyitis, and journalitis”; meaning the dominance of science administrators over practitioners, the tendency to view funding increases as a panacea for solving scientific problems, and progressively blurry lines between scientific and popular writing in order to woo public support for big research projects (Weinberg, 1961). De Solla Price worried that the bureaucracy associated with big science would fail to entice the intellectual mavericks on which science depends (De Solla Price, 1963). These concerns remain valid and have been voiced time and again.

As big science represents a major investment of time, money and manpower, it tends to determine and channel research in particular directions that afford certain possibilities and preclude others (Cook & Brown, 1999). In the worst case, this can result in entire scientific communities following false leads, as was the case in the 1940s and 1950s for Soviet agronomy. Huge investments were made to demonstrate the superiority of Lamarckian over Mendelian theories of heritability, which held back Russian biology for decades (Soyfer, 1994). Such worst-case scenarios are, however, rare. A more likely consequence is that big science can diminish the diversity of research approaches. For instance, plasma fusion scientists are now under pressure to design projects that are relevant to the large-scale International Thermonuclear Experimental Reactor, despite the potential benefits of a wide array of smaller-scale machines and approaches (Hackett et al, 2004).
Big science projects can also involve coordination challenges, take substantial time to realize success, and be difficult to evaluate (Neal et al, 2008).

Another danger of big science is that researchers will lose the intrinsic satisfaction that arises from having personal control over their work. Dissatisfaction could lower research productivity (Babu & Singh, 1998) and might create the concomitant danger of losing talented young researchers to other, more engaging callings. Moreover, the alienation of scientists from their work as a result of big science enterprises can lead to a loss of personal responsibility for research. In turn, this can increase the likelihood of misconduct, as effective social control is eroded and “the satisfactions of science are overshadowed by organizational demands, economic calculations, and career strategies” (Hackett, 1994).

Practicing scientists are aware of these risks. Yet, they remain engaged in large-scale projects because they must, but also because of the real benefits these projects offer. Importantly, big science projects allow for the coordination and activation of diverse forms of expertise across disciplinary, national and professional boundaries to solve otherwise intractable basic and applied problems. Although calling for international and interdisciplinary collaboration is popular, practicing it is notably less popular and much harder (Weingart, 2000). Big science projects can act as a focal point that allows researchers from diverse backgrounds to cooperate, and simultaneously advances different scientific specialties while forging interstitial connections among them.
Another major benefit of big science is that it facilitates the development of common research standards and metrics, allowing for the rapid development of nascent research frontiers (Fujimura, 1996). Furthermore, the high profile of big science efforts such as the HGP and CERN draws public attention to science, potentially enhancing scientific literacy and the public’s willingness to support research.

Big science can also ease some of the problems associated with scientific management. In terms of training, graduate students and junior researchers involved in big science projects can gain additional skills in problem-solving, communication and team working (Court & Morris, 1994). The bureaucratic structure and well-defined roles of big science projects also make leadership transitions and researcher attrition easier to manage compared with the informal, refractory organization of most small research projects. Big science projects also provide a visible platform for resource acquisition and the recruitment of new scientific talent. Moreover, through their sheer size, diversity and complexity, they can also increase the frequency of serendipitous social interactions and scientific discoveries (Hackett et al, 2008). Finally, large-scale research projects can influence scientific and public policy. Big science creates organizational structures in which many scientists share responsibility for, and expectations of, a scientific problem (Van Lente, 1993). This shared ownership and these shared futures help coordinate communication and enable researchers to present a united front when advancing the potential benefits of their projects to funding bodies.

Given these benefits and pitfalls of big science, how might molecular biology best proceed?
Petsko’s response is that, “[s]cientific priorities must, for the most part, be set by the free exchange of ideas in the scientific literature, at meetings and in review panels. They must be set from the bottom up, from the community of scientists, not by the people who control the purse strings.” It is certainly the case, as Petsko also acknowledges, that science has benefited from a combination of generous public support and professional autonomy. However, we are less sanguine about his belief that the scientific community alone has the capacity to ascertain the practical value of particular lines of inquiry, determine the most appropriate scale of research, and bring them to fruition. In fact, current mismatches between the production of scientific knowledge and the information needs of public policy-makers strongly suggest that the opposite is true (Sarewitz & Pielke, 2007).

Instead, we maintain that these types of decision should be made through collective decision-making that involves researchers, governmental funding agencies, science policy experts and the public. In fact, the highly successful HGP involved such collaborations (Lambright, 2002). Taking into account the opinions and attitudes of these stakeholders better links knowledge production to the public good (Cash et al, 2003)—a major justification for supporting big biology. We do agree with Petsko, however, that large-scale projects can develop pathological characteristics, and that all programmes should therefore undergo regular assessments to determine their continuing worth.

Rather than arguing for or against big science, molecular biology would best benefit from strategic investments in a diverse portfolio of big, little and ‘mezzo’ research projects. Their size, duration and organizational structure should be determined by the research question, subject matter and intended goals (Westfall, 2003).
Parties involved in making these decisions should, in turn, aim at striking a profitable balance between differently sized research projects to garner the benefits of each and allow practitioners the autonomy to choose among them.

This will require new, innovative methods for supporting and coordinating research. An important first step is ensuring that funding is made available for all kinds of research at a range of scales. For this to happen, the current funding model needs to be modified. The practice of allocating separate funds for individual investigator-driven and collective research projects is a step in the right direction, but it does not discriminate between projects of different sizes at a sufficiently fine resolution. Instead, multiple funding pools should be made available for projects of different sizes and scales, allowing for greater accuracy in project planning, funding and evaluation.

Second, science policy should consciously facilitate the ‘scaling up’, ‘scaling down’ and concatenation of research projects when needed. For instance, special funds might be established for supporting small-scale but potentially transformative research with the capacity to be scaled up in the future. Alternatively, small-scale satellite research projects that are more nimble, exploratory and risky could complement big science initiatives or be generated by them. This is also in line with Petsko’s statement that “the best kind of big science is the kind that supports and generates lots of good little science.” Another potentially fruitful strategy we suggest would be to fund independent, small-scale research projects to work on co-relevant research, with the later objective of consolidating them into a single project in a kind of building-block assembly.
Using these and other mechanisms to organize research at different scales could help to ameliorate some of the problems associated with big science, while also accruing its most important benefits.

Within the life sciences, the field of ecology perhaps best exemplifies this strategy. Although it encompasses many small-scale laboratory and field studies, ecologists now collaborate in a variety of novel organizations that blend elements of big, little and mezzo science and that are designed to catalyse different forms of research. For example, the US National Center for Ecological Analysis and Synthesis brings together researchers and data from many smaller projects to synthesize their findings. The Long Term Ecological Research Network consists of dozens of mezzo-scale collaborations focused on specific sites, but also leverages big science through cross-site collaborations. While investments are made in classical big science projects, such as the National Ecological Observatory Network, no one project or approach has dominated—nor should it. In these ways, ecologists have been able to reap the benefits of big science whilst maintaining diverse research approaches and individual autonomy, and still enjoying the intrinsic satisfaction associated with scientific work.

Big biology is here to stay and is neither a curse nor a blessing. It is up to scientists and policy-makers to discern how to benefit from the advantages that ‘bigness' has to offer, while avoiding the pitfalls inherent in so doing. The challenge confronting molecular biology in the coming years is to decide which kinds of research project are best suited to getting the job done.
Molecular biology itself arose, in part, from the migration of physicists to biology; as physics research projects and collaborations grew and became more dependent on expensive equipment, appreciating the saliency of one's own work became increasingly difficult, which led some to seek refuge in the comparatively little science of biology (Dev, 1990). The current situation, which Petsko criticizes in his Opinion article, is thus the result of an organizational and intellectual cycle that began more than six decades ago. It would certainly behoove molecular biologists to heed his warnings and consider the best paths forward.

Niki Vermeulen, John N. Parker and Bart Penders

13.
Rinaldi A 《EMBO reports》2012,13(4):303-307
Scientists and journalists try to engage the public with exciting stories, but who is guilty of overselling research and what are the consequences?

Scientists love to hate the media for distorting science or getting the facts wrong. Even as they do so, they court publicity for their latest findings, which can bring a slew of media attention and public interest. Getting your research into the national press can result in great boons in terms of political and financial support. Conversely, when scientific discoveries turn out to be wrong, or to have been hyped, the negative press can have a damaging effect on careers and, perhaps more importantly, on the image of science itself. Walking the line between ‘selling' a story and ‘hyping' it far beyond the evidence is no easy task. Professional science communicators work carefully with scientists and journalists to ensure that the messages from research are translated for the public accurately and appropriately. But when things do go wrong, is it always the fault of journalists, or are scientists and those they employ to communicate sometimes equally to blame?

Hyping in science has existed since the dawn of research itself. When scientists relied on the money of wealthy benefactors with little expertise to fund their research, the temptation to claim that they could turn lead into gold, or that they could discover the secret of eternal life, must have been huge. In the modern era, hyping of research tends to make less exuberant claims, but it is no less damaging and no less deceitful, even if sometimes unintentionally so. A few recent cases have brought this problem to the surface again.

The most frenzied of these was the report in Science last year that a newly isolated bacterial strain could replace phosphate with arsenate in cellular constituents such as nucleic acids and proteins [1].
The study, led by NASA astrobiologist Felisa Wolfe-Simon, showed that a new strain of the Halomonadaceae family of halophilic proteobacteria, isolated from the alkaline and hypersaline Mono Lake in California (Fig 1), could not only survive in arsenic-rich conditions, such as those found in its original environment, but even thrive by using arsenic entirely in place of phosphorus. “The definition of life has just expanded. As we pursue our efforts to seek signs of life in the solar system, we have to think more broadly, more diversely and consider life as we do not know it,” commented Ed Weiler, NASA's associate administrator for the Science Mission Directorate at the agency's Headquarters in Washington, in the original press release [2].

[Figure 1 | Sunrise at Mono Lake. Mono Lake, located in eastern California, is bounded to the west by the Sierra Nevada mountains. This ancient alkaline lake is known for unusual tufa (limestone) formations rising from the water's surface, as well as for its hypersalinity and high concentrations of arsenic. See Wolfe-Simon et al [1]. Credit: Henry Bortman.]

The accompanying “search for life beyond Earth” and “alternative biochemistry makeup” hints contained in the same release were lapped up by the media, which covered the breakthrough with headlines such as “Arsenic-loving bacteria may help in hunt for alien life” (BBC News), “Arsenic-based bacteria point to new life forms” (New Scientist) and “Arsenic-feeding bacteria find expands traditional notions of life” (CNN). However, it did not take long for criticism to manifest, with many scientists openly questioning whether background levels of phosphorus could have fuelled the bacteria's growth in the cultures, whether arsenate compounds are even stable in aqueous solution, and whether the tests the authors used to prove that arsenic atoms were replacing phosphorus ones in key biomolecules were accurate.
The backlash was so bitter that Science published the concerns of several research groups commenting on the technical shortcomings of the study and went so far as to change its original press release for reporters, adding a warning note that reads: “Clarification: this paper describes a bacterium that substitutes arsenic for a small percentage of its phosphorus, rather than living entirely off arsenic.”

Microbiologists Simon Silver and Le T. Phung, from the University of Illinois, Chicago, USA, were heavily critical of the study, voicing their concern in one of the journals of the Federation of European Microbiological Societies, FEMS Microbiology Letters. “The recent online report in Science […] either (1) wonderfully expands our imaginations as to how living cells might function […] or (2) is just the newest example of how scientist-authors can walk off the plank in their imaginations when interpreting their results, how peer reviewers (if there were any) simply missed their responsibilities and how a press release from the publisher of Science can result in irresponsible publicity in the New York Times and on television. We suggest the latter alternative is the case, and that this report should have been stopped at each of several stages” [3]. Meanwhile, Wolfe-Simon is looking for another chance to prove she was right about the arsenic-loving bug, and Silver and colleagues have completed the bacterium's genome shotgun sequencing and found 3,400 genes in its 3.5 million bases (www.ncbi.nlm.nih.gov/Traces/wgs/?val=AHBC01).

“I can only comment that it would probably be best if one had avoided a flurry of press conferences and speculative extrapolations. The discovery, if true, would be similarly impressive without any hype in the press releases,” commented John Ioannidis, Professor of Medicine at Stanford University School of Medicine in the USA.
“I also think that this is the kind of discovery that can definitely wait for a validation by several independent teams before stirring the world. It is not the type of research finding that one cannot wait to trumpet as if thousands and millions of people were to die if they did not know about it,” he explained. “If validated, it may be material for a Nobel prize, but if not, then the claims would backfire on the credibility of science in the public view.”

Another instructive example of science hyping was sparked by a recent report of fossil teeth, dating to between 200,000 and 400,000 years ago, which were unearthed in the Qesem Cave near Tel Aviv by Israeli and Spanish scientists [4]. Although the teeth cannot yet be conclusively ascribed to Homo sapiens, Homo neanderthalensis or any other species of hominid, the media coverage and the original press release from Tel Aviv University stretched the relevance of the story—and the evidence—proclaiming that the finding demonstrates humans lived in Israel 400,000 years ago, which should force scientists to rewrite human history. Were such evidence of modern humans in the Middle East so long ago confirmed, it would indeed clash with the prevailing view of human origin in Africa some 200,000 years ago and the dispersal from the cradle continent that began about 70,000 years ago. But, as freelance science writer Brian Switek has pointed out, “The identity of the Qesem Cave humans cannot be conclusively determined. All the grandiose statements about their relevance to the origin of our species reach beyond what the actual fossil material will allow” [5].

An example of sensationalist coverage? “It has long been believed that modern man emerged from the continent of Africa 200,000 years ago.
Now Tel Aviv University archaeologists have uncovered evidence that Homo sapiens roamed the land now called Israel as early as 400,000 years ago—the earliest evidence for the existence of modern man anywhere in the world,” reads a press release from the New York-based organization, American Friends of Tel Aviv University [6].

“The extent of hype depends on how people interpret facts and evidence, and their intent in the claims they are making. Hype in science can range from ‘no hype', where predictions of scientific futures are 100% fact based, to complete exaggeration based on no facts or evidence,” commented Zubin Master, a researcher in science ethics at the University of Alberta in Edmonton, Canada. “Intention also plays a role in hype and the prediction of scientific futures, as making extravagant claims, for example in an attempt to secure funds, could be tantamount to lying.”

Are scientists more and more often indulging in creative speculation when interpreting their results, just to get extraordinary media coverage of their discoveries? Is science journalism progressively shifting towards hyping stories to attract readers?

“The vast majority of scientific work can wait for some independent validation before its importance is trumpeted to the wider public. Over-interpretation of results is common and as scientists we are continuously under pressure to show that we make big discoveries,” commented Ioannidis.
“However, probably our role [as scientists] is more important in making sure that we provide balanced views of evidence and in identifying how we can question more rigorously the validity of our own discoveries.”

Stephanie Suhr, who is involved in the management of the European XFEL—a facility being built in Germany to generate intense X-ray flashes for use in many disciplines—notes in her introduction to a series of essays on the ethics of science journalism that, “Arguably, there may also be an increasing temptation for scientists to hype their research and ‘hit the headlines'” [7]. In her analysis, Suhr quotes at least one instance—the discovery in 2009 of the Darwinius masillae fossil, presented as the missing link in human evolution [8]—in which the release of a ‘breakthrough' scientific publication seems to have been coordinated with simultaneous documentaries and press releases, resulting in what can be considered a case study in science hyping [7].

Although there is nothing wrong in principle with a broad communication strategy aimed at the rapid dissemination of a scientific discovery, some caveats exist. “[This] strategy […] might be better applied to a scientific subject or body of research. When applied to a single study, there [is] a far greater likelihood of engaging in unmerited hype with the risk of diminishing public trust or at least numbing the audience to claims of ‘startling new discoveries',” wrote science communication expert Matthew Nisbet in his Age of Engagement blog (bigthink.com/blogs/age-of-engagement) about how media communication was managed in the Darwinius affair.
“[A]ctivating the various channels and audiences was the right strategy but the language and metaphor used strayed into the realm of hype,” commented Nisbet, who is an Associate Professor in the School of Communication at American University, Washington DC, USA, in his post [9]. “We are ethically bound to think carefully about how to go beyond the very small audience that follows traditional science coverage and think systematically about how to reach a wider, more diverse audience via multiple media platforms. But in engaging with these new media platforms and audiences, we are also ethically bound to avoid hype and maintain accuracy and context” [9].

But the blame for science hype cannot be laid solely at the feet of scientists and press officers. Journalists must take their fair share of reproach. “As news online comes faster and faster, there is an enormous temptation for media outlets and journalists to quickly publish topics that will grab the readers' attention, sometimes at the cost of accuracy,” Suhr wrote [7]. Of course, the media landscape is extremely varied, as science blogger and writer Bora Zivkovic pointed out. “There is no unified thing called ‘Media'. There are wonderful specialized science writers out there, and there are beat reporters who occasionally get assigned a science story as one of several they have to file every day,” he explained. “There are careful reporters, and there are those who tend to hype. There are media outlets that value accuracy above everything else; others that put beauty of language above all else; and there are outlets that value speed, sexy headlines and ad revenue above all.”

One notable example of media-sourced hype comes from J. Craig Venter's announcement in the spring of 2010 of the first self-replicating bacterial cell controlled by a synthetic genome (Fig 2).
A major media buzz ensued, over-emphasizing and somewhat distorting what was nonetheless a remarkable scientific achievement. Press coverage ranged from the extremes of announcing ‘artificial life' to saying that Venter was playing God, adding to the cultural and bioethical tension the warning that synthetic organisms could be turned into biological weapons or cause environmental disasters.

[Figure 2 | Schematic depicting the assembly of a synthetic Mycoplasma mycoides genome in yeast. For details of the construction of the genome, please see the original article. From Gibson et al [13] Science 329, 52–56. Reprinted with permission from AAAS.]

“The notion that scientists might some day create life is a fraught meme in Western culture. One mustn't mess with such things, we are told, because the creation of life is the province of gods, monsters, and practitioners of the dark arts. Thus, any hint that science may be on the verge of putting the power of creation into the hands of mere mortals elicits a certain discomfort, even if the hint amounts to no more than distorted gossip,” remarked Rob Carlson, who writes on the future role of biology as a human technology, about the public reaction and the media frenzy that arose from the news [10].

Yet the media can also behave responsibly when faced with extravagant claims in press releases. Fiona Fox, Chief Executive of the Science Media Centre in the UK, details such an example in her blog, On Science and the Media (fionafox.blogspot.com). The Science Media Centre's role is to facilitate communication between scientists and the press, so they often receive calls from journalists asking to be put in touch with an expert. In this case, the journalist asked for an expert to comment on a story about silver being more effective against cancer than chemotherapy.
A wild claim; yet, as Fox points out in her blog, the hype came directly from the institution's press office: “Under the heading ‘A silver bullet to beat cancer?' the top line of the press release stated that ‘Lab tests have shown that it (silver) is as effective as the leading chemotherapy drug—and may have far fewer side effects.' Far from including any caveats or cautionary notes up front, the press office even included an introductory note claiming that the study ‘has confirmed the quack claim that silver has cancer-killing properties'” [11]. Fox praises the majority of the UK national press, which concluded that this was not a big story to cover, pointing out that, “We've now got to the stage where not only do the best science journalists have to fight the perverse news values of their news editors but also to try to read between the lines of overhyped press releases to get to the truth of what a scientific study is really claiming.”

Yet, is hype detrimental to science? In many instances, the concern is that hype inflates public expectations, resulting in a loss of trust in a given technology or research avenue if promises are not kept; however, the premise is not fully proven (Sidebar A). “There is no empirical evidence to suggest that unmet promises due to hype in biotechnology, and possibly other scientific fields, will lead to a loss of public trust and, potentially, a loss of public support for science. Thus, arguments made on hype and public trust must be nuanced to reflect this understanding,” Master pointed out.

Sidebar A | Up and down the hype cycle

Although hype is usually considered a negative and largely unwanted aspect of scientific and technological communication, it cannot be denied that emphasizing, at least initially, the benefits of a given technology can further its development and use. From this point of view, hype can be seen as a normal stage of technological development, within certain limits. The maturity, adoption and application of specific technologies apparently follow a common trend pattern, described by the information technology company Gartner, Inc. as the ‘hype cycle'. The idea is based on the observation that, after an initial trigger phase, novel technologies pass through a peak of over-excitement (or hype), often followed by a subsequent general disenchantment, before eventually coming under the spotlight again and reaching a stable plateau of productivity. Thus, hype cycles “[h]ighlight overhyped areas against those that are high impact, estimate how long technologies and trends will take to reach maturity, and help organizations decide when to adopt” (www.gartner.com).

“Science is a human endeavour and as such it is inevitably shaped by our subjective responses. Scientists are not immune to these same reactions and it might be valuable to evaluate the visibility of different scientific concepts or technologies using the hype cycle,” commented Pedro Beltrao, a cellular biologist at the University of California San Francisco, USA, who runs the Public Rambling blog (pbeltrao.blogspot.com) about bioinformatics science and technology. The exercise of placing technologies in the context of the hype cycle can help us to distinguish between their real productive value and our subjective level of excitement, Beltrao explained. “As an example, I have tried to place a few concepts and technologies related to systems biology along the cycle's axis of visibility and maturity [see illustration].
Using this, one could suggest that technologies like gene-expression arrays or mass-spectrometry have reached a stable productivity level, while the potential of concepts like personalized medicine or genome-wide association studies (GWAS) might be currently over-valued.”

Together with bioethicist colleague David Resnik, Master has recently highlighted the need for empirical research that examines the relationships between hype, public trust, and public enthusiasm and/or support [12]. Their argument proposes that studies on the effect of hype on public trust can be undertaken using both quantitative and qualitative methods: “Research can be designed to measure hype through a variety of sources including websites, blogs, movies, billboards, magazines, scientific publications, and press releases,” the authors write. “Semi-structured interviews with several specific stakeholders including genetics researchers, media representatives, patient advocates, other academic researchers (that is, ethicists, lawyers, and social scientists), physicians, ethics review board members, patients with genetic diseases, government spokespersons, and politicians could be performed. Also, members of the general public would be interviewed” [12]. They also point out that such an approach to estimating hype and its effect on public enthusiasm and support should carefully define the public under study, as different publics might have different expectations of scientific research, and will therefore have different baseline levels of trust.

Ultimately, exaggerating, hyping or outright lying is rarely a good thing. Hyping science is detrimental to various degrees to all science communication stakeholders—scientists, institutions, journalists, writers, newspapers and the public.
It is important that scientists take responsibility for their share of the hyping and do not automatically blame the media for making things up or getting things wrong. Such discipline in science communication is increasingly important as science searches for answers to the challenges of this century. Increased awareness of the underlying risks of over-hyping research should help to balance the scientific facts with speculation on the enticing truths and possibilities they reveal. The real challenge lies in favouring such an evolved approach to science communication in the face of a rolling 24-hour news cycle, tight science budgets and the uncontrolled and uncontrollable world of the Internet.

[Illustration | The hype cycle for the life sciences. Pedro Beltrao's view of the excitement–disappointment–maturation cycle of bioscience-related technologies and/or ideas. GWAS: genome-wide association studies. Credit: Pedro Beltrao.]

14.
How does the womb determine the future? Scientists have begun to uncover how environmental and maternal factors influence our long-term health prospects.

About two decades ago, David Barker, Professor of Clinical Epidemiology at the University of Southampton, UK, proposed the hypothesis that malnutrition during pregnancy and the resultant low birth weight increase the risk of developing cardiovascular disease in adulthood. “The womb may be more important than the home,” remarked Barker in a note about his theory (Barker, 1990). “The old model of adult degenerative disease was based on the interaction between genes and an adverse environment in adult life. The new model that is developing will include programming by the environment in fetal and infant life.”

The ‘Barker theory' has been increasingly accepted and has been expanded to other diseases, prominently diabetes and obesity, but also osteoporosis and allergies. “In the last few years, the evidence [of an extended] range of potential disease phenotypes with a prenatal developmental component to risk […] has become much stronger,” said Peter Gluckman at the University of Auckland, New Zealand.
“We also need to give greater attention to the growing evidence of prenatal and early postnatal effects on cognitive and non-cognitive functional development and to variation in life history patterns.” Similarly, Michael Symonds and colleagues from the University Hospital at Nottingham, UK, wrote: “These critical periods occur at times when fetal development is plastic; in other words, when the fetus is experiencing rapid cell proliferation making it sensitive to environmental challenges” (Symonds et al, 2009).

This new idea about the influence of the environment during prenatal development on adult disease risk comes with a better understanding of epigenetic processes—the biological mechanisms that explain how in utero experiences could translate into phenotypic variation and disease susceptibility within, or over several, generations (Gluckman et al, 2009; Fig 1). “I think it has been the combination of good empirical data (experimental and clinical), the appearance of epigenetic data to provide molecular mechanisms and a sound theoretical framework (based on evolutionary biology) that has allowed this field to mature,” said Gluckman. “Having said that, I think it is only as more human molecular data (epigenetic) emerges that this will happen.”

[Figure 1 | Environmental sensitivity of the epigenome throughout life. Adapted from Gluckman et al (2009), with permission.]

Epidemiological data in support of the Barker theory have come from investigations of the effects of the ‘Dutch famine'. Between November 1944 and May 1945, the western part of The Netherlands suffered a severe food shortage, owing to the ravages of the Second World War. In large cities such as Utrecht, Amsterdam, Rotterdam and The Hague, the average individual daily rations were as low as 400–800 kcal.
In 1994, a large study involving hundreds of people born between November 1943 and February 1947 in a major hospital in Amsterdam was initiated to assess whether and to what extent the famine had prenatally affected the health of the subjects in later life. The Dutch Famine Birth Cohort Study (www.hongerwinter.nl) found a strong link between malnutrition and under-nutrition in utero and cardiovascular disease and diabetes in later life, as well as increased susceptibility to pulmonary diseases, altered coagulation, higher incidence of breast cancer and other diseases, although some of these links were only found in a few cases.

More recently, a group led by Bastiaan Heijmans at the Leiden University Medical Centre in The Netherlands and Columbia University (New York, USA) conducted epigenetic studies of individuals who had been exposed to the Dutch famine during gestation. They analysed the level of DNA methylation at several candidate loci in the cohort and found decreased methylation of the imprinted insulin-like growth factor 2 (IGF2) gene—a key factor in human growth and development—compared with the unexposed, same-sex siblings of the cohort (Heijmans et al, 2008). Further studies have identified another six genes implicated in growth, metabolic and cardiovascular phenotypes that show altered methylation statuses associated with prenatal exposure to famine (Heijmans et al, 2009). The overall conclusion from this work is that exposure to certain conditions in the womb can lead to epigenetic marks that can persist throughout life. “It is remarkable to realize that history can leave an imprint on our DNA that is visible up to six decades later. The current challenge is to scale up such studies to the genome,” said Heijmans. His team is now using high-throughput sequencing to see whether there are genomic regions that are more susceptible to prenatal environmental influences.
“Genome-scale data may also allow us to observe the hypothesized accumulation of epigenetic changes in specific biological processes, perhaps as a sign of adaptive responses,” he said.

Epigenetic modification of genes involved in key regulatory pathways is central to the mechanisms of nutritional programming of disease, but other factors also seem to have a role, including altered cell number or cell type, precocious activation of the hypothalamic–pituitary–adrenal axis, increased local glucocorticoid and endocrine sensitivity, impaired mitochondrial function and reduced oxidative capacity. “The particular type of mechanism invoked seems to vary between tissues according to the duration and timing of the nutritional intervention through pregnancy and/or lactation,” commented Symonds et al (2009).

“If we just focus on metabolic, cardiovascular and body compositional outcome, I think the data is now overwhelming that there is an important life-long early developmental contribution. The emergent data would suggest that the underpinning epigenetic processes are likely to be at least as important as genetic variation in contributing to disease risk,” commented Gluckman. His research in animal models has shown that epigenetic changes are potentially reversible in mammals through intervention during development, when the growing organism still has sufficient plasticity (Gluckman et al, 2007). For instance, the neonatal administration of leptin has a bidirectional effect on gene expression and the epigenetic status of key genes involved in metabolic regulation in adult rats; an effect that is dependent on prenatal nutrition and unaffected by post-weaning nutrition (normal compared with high-fat diet). In rats that were manipulated in utero by maternal under-nutrition and fed a hypercaloric diet after weaning, leptin treatment normalized adiposity and hepatic gene expression of proteins that are central to lipid metabolism and glucose homeostasis.
“The experimental data showing that programming is reversible is a critical proof of concept. I think there is still confusion as to the role of catch-up growth—its effect may be dependent on its timing and this may have implications for infant nutrition,” Gluckman said.

Central to this view of the link between the developing fetus and its later risk of metabolic disease is the idea of ‘developmental mismatch'. The fetus is programmed, largely through epigenetic changes, to match its environment. However, if the environment in childhood and adult life differs sharply from that during prenatal and early postnatal development, ill adaptation can occur and bring disease in its wake (Gluckman & Hanson, 2006). Poor nutrition during fetal development, for example, would lead the organism to expect a hostile future environment, adversely affecting its ability to cope with a richer one. “Developmental factors do not cause disease in this context, rather they create a situation where the individual becomes more (or less) sensitive in an obesogenic postnatal environment,” said Gluckman. “The experimental and early clinical data point to both central and peripheral effects and this may explain why lifestyle intervention is so hard in some individuals.”

Yet there is another nutrition-related pathway that goes beyond mismatch. According to a recent, large population-based study published in The Lancet, maternal weight gain during pregnancy increases birth weight independently of genetic factors, which increases the long-term risk of obesity-related disease in offspring (Ludwig & Currie, 2010).
To reduce or eliminate potential confounds such as genetics, sociodemographic factors or other individual characteristics, the researchers examined the association between maternal weight gain—as a measure of over-nutrition during pregnancy—and birth weight using state-based birth registry data in Michigan and New Jersey, allowing them to compare outcomes from several pregnancies in the same mother. “During pregnancy, insulin resistance develops in the mother to shunt vital nutrients to the growing fetus. Excessive weight or weight gain during pregnancy exaggerates this normal process by further increasing insulin resistance and possibly also by affecting other maternal hormones that regulate placental nutrient transporters. The resulting high rate of nutrient transfer stimulates fetal insulin secretion, overgrowth, and increased adiposity,” the authors speculated (Ludwig & Currie, 2010).

It could be that epigenetic malprogramming is also involved in these cases. The group of Andreas Plagemann at the Charité–University Medicine in Berlin, Germany, analysed acquired alterations of DNA methylation patterns of the hypothalamic insulin receptor promoter (IRP) in neonatally overfed rats. They found that altered nutrition during the critical developmental period of perinatal life induced IRP hypermethylation in a seemingly glucose-dependent manner. This revealed an epigenetic mechanism that could affect the function of a promoter that codes for a receptor involved in the life-long regulation of food intake, body weight and metabolism (Plagemann et al, 2010). “In parallel with the general ‘diabesity’ epidemics, diabetes during pregnancy and overweight in pregnant women meanwhile reach dramatic prevalences. Consequently, mean birth weight and frequencies of ‘fat babies’ rise,” said Plagemann.
“Taking together epidemiological, clinical and experimental observations, it seems obvious that fetal hyperinsulinism induced by maternal hyperglycaemia/overweight has ‘functionally teratogenic’ significance for a permanently increased disposition to obesity, diabetes, the metabolic syndrome, and subsequent cardiovascular diseases in the offspring” (Fig 2).

Figure 2 | Pathogenetic framework, mechanisms and consequences of perinatal malprogramming, showing the etiological significance of perinatal overfeeding and hyperinsulinism for excess weight gain, obesity, diabetes mellitus and cardiovascular diseases in later life. Credit: Andreas Plagemann.

Added to the mix is the ‘endocrine-disruptor hypothesis’, one nuance of which proposes that prenatal—as well as postnatal—exposure to environmental chemicals contributes to adipogenesis and the development of obesity by interfering with homeostatic mechanisms that control weight. Several environmental pollutants, nutritional components and pharmaceuticals have been suggested to have ‘obesogenic’ properties—the best known are tributyltin, bisphenol A and phthalates (Grün & Blumberg, 2009). “While one cannot presently estimate the degree to which obesogen exposure contributes to the observed increases in obesity, the main conclusion to be drawn from research in our laboratory is that obesogens exist and that prenatal obesogen exposure can predispose an exposed individual to become fatter, later in life,” said Bruce Blumberg at the University of California at Irvine, USA, who is also credited with coining the term ‘obesogen’. “The existence of such chemicals was not even suspected as recently as seven years ago when we began this research.”

It is clear that diet and exercise are important contributors to the body weight of an individual.
However, weight maintenance is not as simple as balancing a ‘caloric checkbook’, or fewer people would be obese, Blumberg commented. Early nutrition and chemical exposure could alter the metabolic set-point of an individual, making their subsequent fight against weight gain more difficult. “We do not currently know how many chemicals are obesogenic or the entire spectrum of mechanisms through which obesogens act,” Blumberg said. “Our data suggest that prenatal obesogen exposure alters the fate of a type of stem cells in the body to favour the development of fat cells at the expense of other cell types (such as bone). In turn, this is likely to increase one's weight with time.”

Obesogen exposure in utero and/or during the first stages of postnatal growth could therefore predispose a child to obesity by influencing all aspects of adipose tissue growth, starting from multipotent stem cells and ending with mature adipocytes (Janesick & Blumberg, 2011). “Epigenetics may also allow us to have a clearer view of the role of xenobiotics, such as bisphenol A, where traditional teratogenetic approaches to analysis seem inappropriate,” Gluckman said. “I expect the potential for either direct or indirect epigenetic inheritance will get much focus in human studies over the next few years.”

The impact of the mother's emotional state during pregnancy on the child's behaviour and cognitive development is also fertile ground for research. “It has been known from over 50 years of research in animals that stress during pregnancy can have long-term effects on the behavioural and cognitive outcome for the offspring. Over the last ten years many studies, including our own, have shown that the same is true in humans,” said Vivette Glover, a leading expert in the field from Imperial College (London, UK).
“If the mother is stressed or anxious while she is pregnant, her child is more likely to have a range of problems such as symptoms of anxiety or depression, attention deficit hyperactivity disorder (ADHD) or conduct disorder, and to be slower at learning, even after allowing for postnatal influences.” Most children are not affected, but if the anxiety level of the mother is in the top 15% of the general population, the risk of her child having these problems increases from about 5% to 10%, Glover explained.

Focusing on the mechanisms that underlie this, Glover's team has shown that the cognitive development of the child is slower if the fetus is exposed to higher levels of the stress hormone cortisol in the womb (Bergman et al, 2010). Cortisol in fetal circulation is a combination of that produced endogenously by the fetus and that derived from the mother, through the placenta. Glover's hypothesis is that the placenta might have a key role as a programming vector: if the mother is stressed and more anxious, the placenta becomes a less effective barrier and allows more cortisol to pass from the mother to the fetus (O'Donnell et al, 2009). “Our most recent research has studied how these prenatal effects can be altered by the later quality of the mothering, and we have found that the effects can be exacerbated if the child is insecure and buffered if the child is securely attached to the mother. So the effects are not all over at birth. There are both prenatal and postnatal effects,” Glover said. “There are large public health implications of all this.
If we, as a society, cared better for the emotional wellbeing of our pregnant women we would also improve the behavioural, emotional and cognitive outcome for the next generation,” she concluded (Sidebar A).

Sidebar A | Focus on fetal life to help the next generation

“The global burden of death, disability, and loss of human capital as a result of impaired fetal development is huge and affects both developed and developing countries,” concludes a recent World Health Organization technical consultation (WHO, 2006). It advocates moving away from a focus on birth weight to embrace more factors to ensure an optimal environment for the fetus, to maximize its potential for a healthy life. As our knowledge of developmental biology expands, there is progressively greater awareness that events early in human development can have effects in later stages of life, and even inter-generational consequences in terms of non-communicable diseases, such as cardiovascular disease and diabetes.

Calling for a radical change in medical attitudes—which they say are responsible for not giving enough credit to “the concept that environmental factors acting early in life (usually in fetal life) have profound effects on vulnerability to disease later in life”—Peter Gluckman, Mark Hanson and Murray Mitchell have recently proposed several prevention and intervention initiatives that could reduce the burden of chronic disease in the next generation (Gluckman et al, 2010). These include limitation of adolescent pregnancy, possibly delaying the age of first pregnancy until four years after menarche; promotion of a healthy diet and lifestyle among women becoming pregnant to avoid the long-term effects of both excessive and deficient maternal nutrition, smoking, or drug and alcohol abuse; and encouraging breastfeeding for optimal growth, resistance to infection, cardiovascular health and neurocognitive development. Clearly, such actions would face a mix of educational, political and social issues, depending on the geographical or cultural area.

“None of these solutions seems sophisticated, although it may have taken the recent insights into underlying developmental epigenetic mechanisms to emphasize them.
But, when viewed in terms of their potential impact, especially in developing societies and in lower socioeconomic groups in developed countries, it is clear that their importance has been underestimated” (Gluckman et al, 2010).

A more integrated view of the developmental ontogeny of a human from embryo to adult is needed, grounded by appreciation of the fact that the developmental trajectory of the fetus is influenced by factors such as maternal nutrition, body composition and maternal age (Fig 3). This must not be limited to the offspring of gestational diabetics and obese mothers. “While these are more extreme influences on the fetus and will lead to immediate consequences (blurring the boundary between what is physiological and pathophysiological), I think the most important observations and conceptual advances will emerge from understanding the long-term implications and underpinning mechanisms of relatively normal early development still having plastic consequences,” Gluckman said. “Thus, what seem to be unremarkable pregnancies still have important influences on the destiny of the offspring.” Though this might be easy to say, the regulatory mechanisms that underlie the complex journey of development await further clarification.

Figure 3 | Leonardo da Vinci: Studies of the fetus in the womb, circa 1510–1513. In da Vinci's words, referring to his treatise on anatomy, for which these drawings were made: “This work must begin with the conception of man, and describe the nature of the womb and how the fetus lives in it, up to what stage it resides there, and in what way it quickens into life and feeds. Also its growth and what interval there is between one stage of growth and another. What it is that forces it out from the body of the mother, and for what reasons it sometimes comes out of the mother's womb before the due time” (Dunn, 1997).

15.
Humans and beetles both have a species-specific Umwelt circumscribed by their sensory equipment. However, Ladislav Kováč argues that humans, unlike beetles, have invented scientific instruments that are able to reach beyond the conceptual borders of our Umwelt.

You may have seen the film Microcosmos, produced in 1996 by the French biologists Claude Nuridsany and Marie Perrenou. It does not star humans, but much smaller creatures, mostly insects. The filmmakers' magnifying camera transposes the viewer into the world of these organisms. For me, Microcosmos is not an ordinary naturalist documentary; it is an exercise in metaphysics.

One sequence in the film shows a dung beetle—with the ‘philosophical’ generic name Sisyphus—rolling a ball of horse manure twice its size that becomes stuck on a twig. As the creature struggles to free the dung, it gives the impression that it is both worried and obstinate. As we humans know, the ball represents a most valuable treasure for the beetle: it will lay its eggs into the manure that will later feed its offspring. The behaviour of the beetle is biologically meaningful; it serves its Darwinian fitness.

Yet, the dung beetle knows nothing of the function of manure, nor of the horse that dropped the excrement, nor of the human who owned the horse. Sisyphus lives in a world that is circumscribed by its somatic sensors—a species-specific world that the German biologist and philosopher Jakob von Uexküll would have called the dung beetle's ‘Umwelt’. The horse, too, has its own Umwelt, as does the human. Yet, the world of the horse, just like the world of the man, does not exist for the beetle.

If a ‘scholar’ among dung beetles attempted to visualize the world ‘out there’, what would be the dung-beetles' metaphysics—their image of a part of the world about which they have no data furnished by their sensors?
What would be their religions, their truths, or the Truth—revealed, and thus indisputable?

Beetles are most successful animals; one animal in every four is a beetle, leading the biologist J.B.S. Haldane to quip that the Creator must have “had an inordinate fondness for beetles”. Are we humans so different from dung beetles? By birth we are similar: inter faeces et urinas nascimur—we are born between faeces and urine—as Aurelius Augustine remarked 1,600 years ago. Humans also have a species-specific Umwelt that has been shaped by biological evolution. It is a richer one than the Umwelt of beetles, as we have more sensors than they do. Relative to body size, we also possess a much larger brain and with it the capacity to make versatile movements with our hands and to manipulate objects finely with our fingers.

This manual dexterity has enabled humans to fabricate artefacts that are, in a sense, extensions and refinements of the human hand. The simplest one, a coarse-chipped stone, represents the evolutionary origin of artefacts. Step by step, by a ratchet-like process, artefacts have become ever more complicated: as an example, a Boeing 777 is assembled from more than three million parts. At each step, humans have just added a tiny improvement to the previously achieved state. Over time, the evolution of artefacts has become less dependent on human intention and may soon result in artefacts with the capacity for self-improvement and self-reproduction. In fact, it is by artefacts that humans transcend their biology; artefacts make humans different from beetles. Here is the essence of the difference: humans roll their artefactual balls, no less worried and obstinate than beetles, but, in contrast to the latter, humans often do it even if the action is biologically meaningless, at the expense of their Darwinian fitness. Humans are biologically less rational than beetles.

Artefacts have immensely enriched the human Umwelt.
From among them, scientific instruments should be singled out, as they function as novel, extrasomatic sensors of the human species. They have substantially refined human knowledge of the Umwelt. But they are also reaching out—both to a distance and at a rate that are exponentially increasing—beyond the boundary of the human Umwelt, beyond the conceptual confines that we call Kant's barriers, into the world that has long been a subject of human ‘dung-beetle-like’ metaphysics. Nevertheless, our theories about this world can now be substantiated by data coming from the extrasomatic sensors. These instruments, fumbling in the unknown, supply reliable and reproducible data such that their messages must be true. They supersede our arbitrary guesses and fancies, but their truth seems to be out of our conceptual grasp. Conceptually, our mind confines us to our species-specific Umwelt.

We continue to share the common fate of our fellow dung beetles: there is undeniably a world outside the confines of our species-specific Umwelt, but if the world of humans is too complex for the neural ganglia of beetles, the world beyond Kant's barriers may similarly exceed the capacity of the human brain. The physicist Richard Feynman (1965) stated, perhaps resignedly, “I can safely say that nobody today understands quantum mechanics.” Frank Gannon (2007) likewise commented that biological research, similarly to research in quantum mechanics, might be approaching a state “too complex to comprehend”. New models of the human brain itself may turn out to be “true and effective—and beyond comprehension” (Kováč, 2009).

The advances of science notwithstanding, the knowledge of the universe that we have gained on the planet Earth might yet be in its infancy. However, in contrast to the limited capacity of humans, the continuing evolution of artefacts may mean that they face no limits in their explorative potential.
They might soon dispense with our conceptual assistance in exploring the realms that will remain closed to the human mind forever.

16.
Rinaldi A 《EMBO reports》2012,13(1):24-27
Does the spin of an electron allow birds to see the Earth's magnetic field? Andrea Rinaldi investigates the influence of quantum events in the biological world.

The subatomic world is nothing like the world that biologists study. Physicists have struggled for almost a century to understand the wave–particle duality of matter and energy, but many questions remain unanswered. That biological systems ultimately obey the rules of quantum mechanics might be self-evident, but the idea that those rules are the very basis of certain biological functions has needed 80 years of thought, research and development for evidence to begin to emerge (Sidebar A).

Sidebar A | Putting things in their place

Although Erwin Schrödinger (1887–1961) is often credited as the ‘father’ of quantum biology, owing to the publication of his famous 1944 book, What is Life?, the full picture is more complex. While other researchers were already moving towards these concepts in the 1920s, the German theoretical physicist Pascual Jordan (1902–1980) was actually one of the first to attempt to reconcile biological phenomena with the quantum revolution that Jordan himself, working with Max Born and Werner Heisenberg, largely ignited. “Pascual Jordan was one of many scientists at the time who were exploring biophysics in innovative ways. In some cases, his ideas have proven to be speculative or even fantastical. In others, however, his ideas have proven to be really ahead of their time,” explained Richard Beyler, a science historian at Portland State University, USA, who analysed Jordan's contribution to the rise of quantum biology (Beyler, 1996). “I think this applies to Jordan's work in quantum biology as well.”

Beyler also remarked that some of the well-known figures of molecular biology's past—Max Delbrück is a notable example—entered into their studies at least in part as a response or rejoinder to Jordan's work. “Schrödinger's book can also be read, on some level, as an indirect response to Jordan,” Beyler said.

Jordan was certainly a complex personality and his case is rendered more complicated by the fact that he explicitly hitched his already speculative scientific theories to various right-wing political philosophies. “During the Nazi regime, for example, he promoted the notion that quantum biology served as evidence for the naturalness of dictatorship and the prospective death of liberal democracy,” Beyler commented. “After 1945, Jordan became a staunch Cold Warrior and saw in quantum biology a challenge to philosophical and political materialism.
Needless to say, not all of his scientific colleagues appreciated these propagandistic endeavors.”

Pascual Jordan [pictured above] and the dawn of quantum biology. From 1932, Jordan started to outline the new field's background in a series of essays that were published in journals such as Naturwissenschaften. An exposition of quantum biology is also encountered in his book Die Physik und das Geheimnis des organischen Lebens, published in 1941. Photo courtesy of Luca Turin.

Until very recently, it was not even possible to investigate whether quantum phenomena such as coherence and entanglement could play a significant role in the function of living organisms. As such, researchers were largely limited to computer simulations and theoretical experiments to explain their observations (see A quantum leap in biology, www.emboreports.org). Recently, however, quantum biologists have been making inroads into developing methodology to measure the degree of quantum entanglement in light-harvesting systems. Their breakthrough has turned once ephemeral theories into solid evidence, and has sparked the beginning of an entirely new discipline.

How widespread the direct relevance of quantum effects is in nature is hard to say, and many scientists suspect that there are only a few cases in which quantum mechanics has a crucial role. However, interest in the field is growing and researchers are looking for more examples of quantum-dependent biological systems. In a way, quantum biology can be viewed as a natural evolution of biophysics, moving from the classical to the quantum, from the atomic to the subatomic.
Yet the discipline might prove to be an even more intimate and further-reaching marriage that could provide a deeper understanding of things such as protein energetics and dynamics, and all biological processes where electrons flow.

Among the biological systems in which quantum effects are believed to have a crucial role is magnetoreception, although the nature of the receptors and the underlying biophysical mechanisms remain unknown. The possibility that organisms use a ferromagnetic material (magnetite) in some cases has received some confirmation, but support is growing for the explanation lying in a chemical detection mechanism with quantum mechanical properties. This explanation posits a chemical compass based on the light-triggered production of a radical pair—a pair of molecules, each with an unpaired electron—the spins of which are entangled. If the products of the radical pair system are spin-dependent, then a magnetic field—like the geomagnetic one—that affects the direction of spin will alter the reaction products. The idea is that these reaction products affect the sensitivity of light sensors in the eye, thus allowing organisms to ‘see’ magnetic fields.

The research comes from a team led by Thorsten Ritz at the University of California Irvine, USA, and other groups, who have suggested that the radical pair reaction takes place in the molecule cryptochrome. Cryptochromes are flavoprotein photoreceptors first identified in the model plant Arabidopsis thaliana, in which they play key roles in growth and development. More recently, cryptochromes have been found to have a role in the circadian clock of fruit flies (Ritz et al, 2010) and are known to be present in migratory birds.
Intriguingly, magnetic fields have been shown to have an effect on both Arabidopsis seedlings, which respond as though they have been exposed to higher levels of blue light, and Drosophila, in which the period length of the clock is lengthened, mimicking the effect of increased blue light signal intensity on cryptochromes (Ahmad et al, 2007; Yoshii et al, 2009).

Direct evidence that cryptochrome is the avian magnetic compass is currently lacking, but the molecule does have some features that make its candidacy possible. In a recent review (Ritz et al, 2010), Ritz and colleagues discussed the mechanism by which cryptochrome might form radical pairs. They argued that “Cryptochromes are bound to a light-absorbing flavin cofactor (FAD) which can exist in three interconvertable [sic] redox forms: (FAD, FADH•, FADH−),” and that the redox state of FAD is light-dependent. As such, both the oxidation and reduction of the flavin have radical species as intermediates. “Therefore both forward and reverse reactions may involve the formation of radical pairs” (Ritz et al, 2010). Although speculative, the idea is that a magnetic field could alter the spin of the free electrons in the radical pairs, resulting in altered photoreceptor responses that could be perceived by the organism. “Given the relatively short time from the first suggestion of cryptochrome as a magnetoreceptor in 2000, the amount of studies from different fields supporting the photo-magnetoreceptor and cryptochrome hypotheses […] is promising,” the authors concluded.
“It suggests that we may be only one step away from a true smoking gun revealing the long-sought-after molecular nature of receptors underlying the 6th sense and thus the solution of a great outstanding riddle of sensory biology.”

Research into quantum effects in biology took off in 2007 with groundbreaking experiments from Graham Fleming's group at the University of California, Berkeley, USA. Fleming's team were able to develop tools that allowed them to excite the photosynthetic apparatus of the green sulphur bacterium Chlorobium tepidum with short laser pulses to demonstrate that wave-like energy transfer takes place through quantum coherence (Engel et al, 2007). Shortly after, Martin Plenio's group at Ulm University in Germany and Alán Aspuru-Guzik's team at Harvard University in the USA simultaneously provided evidence that it is a subtle interplay between quantum coherence and environmental noise that optimizes the performance of biological systems such as the photosynthetic machinery, adding further interest to the field (Plenio & Huelga, 2008; Rebentrost et al, 2009). “The recent Quantum Effects in Biological Systems (QuEBS) 2011 meeting in Ulm saw an increasing number of biological systems added to the group of biological processes in which quantum effects are suspected to play a crucial role,” commented Plenio, one of the workshop organizers; he mentioned the examples of avian magnetoreception and the role of phonon-assisted tunnelling to explain the function of the sense of smell (see below). “The study of quantum effects in biological systems is a rapidly broadening field of research in which intriguing phenomena are yet to be uncovered and understood,” he concluded.

“The area of quantum effects in biology is very exciting because it is pushing the limits of quantum physics to a new scale,” Yasser Omar from the Technical University of Lisbon, Portugal, commented.
“[W]e are finding that quantum coherence plays a significant role in the function of systems that we previously thought would be too large, too hot—working at physiological temperatures—and too complex to depend on quantum effects.”

Another growing focus of quantum biologists is the sense of smell and odorant recognition. Mainstream researchers have always favoured a ‘lock-and-key’ mechanism to explain how organisms detect and distinguish different smells. In this case, the identification of odorant molecules relies on their specific shape to activate receptors on the surface of sensory neurons in the nasal epithelium. However, a small group of ‘heretics’ think that the smell of a molecule is actually determined by intramolecular vibrations, rather than by its shape. This, they say, explains why the shape theory has so far failed to explain why different molecules can have similar odours, while similar molecules can have dissimilar odours. It also goes some way to explaining how humans can manage with fewer than 400 smell receptors.

A recent study in Proceedings of the National Academy of Sciences USA has now provided new grist for the mill for ‘vibrationists’. Researchers from the Biomedical Sciences Research Center “Alexander Fleming”, Vari, Greece—where the experiments were performed—and the Massachusetts Institute of Technology (MIT), USA, collaborated to replace hydrogen with deuterium in odorants such as acetophenone and 1-octanol, and asked whether Drosophila flies could distinguish the two isotopes, which are identically shaped but vibrate differently (Franco et al, 2011).
Not only were the flies able to discriminate between the isotopic odorants, but when trained to discriminate against the normal or deuterated isotopes of a compound, they could also selectively avoid the corresponding isotope of a different odorant. The findings are inconsistent with a shape-only model for smell, the authors concluded, and suggest that flies can ‘smell molecular vibrations’.

“The ability to detect heavy isotopes in a molecule by smell is a good test of shape and vibration theories: shape says it should be impossible, vibration says it should be doable,” explained Luca Turin from MIT, one of the study's authors. Turin is a major proponent of the vibration theory and suggests that the transduction of molecular vibrations into receptor activation could be mediated by inelastic electron tunnelling (Fig 1; see also The scent of life, www.emboreports.org). “The results so far had been inconclusive and complicated by possible contamination of the test odorants with impurities,” Turin said. “Our work deals with impurities in a novel way, by asking flies whether the presence of deuterium isotope confers a common smell character to odorants, much in the way that the presence of -SH in a molecule makes it smell ‘sulphuraceous’, regardless of impurities. The flies' answer seems to be ‘yes’.”

Figure 1 | Diagram of a vibration-sensing receptor using an inelastic electron tunnelling mechanism. An odorant—here benzaldehyde—is depicted bound to a protein receptor that includes an electron donor site at the top left, to which an electron—blue sphere—is bound. The electron can tunnel to an acceptor site at the bottom right while losing energy (vertical arrow) by exciting one or more vibrational modes of the benzaldehyde. When the electron reaches the acceptor, the signal is transduced via a G-protein mechanism, and the olfactory stimulus is triggered.
Credit: Luca Turin.

One of the study's Greek co-authors, Efthimios Skoulakis, suggested that flies are better suited than humans to this experiment for a couple of reasons. “[The flies] seem to have better acuity than humans and they cannot anticipate the task they will be required to complete (as humans would), thus reducing bias in the outcome,” he said. “Drosophila does not need to detect deuterium per se to survive and be reproductively successful, so it is likely that detection of the vibrational difference between such a compound and its normal counterpart reflects a general property of olfactory systems.”

Jennifer Brookes, a physicist at University College London, UK, explained that recent advances in determining whether quantum effects have a role in odorant recognition have involved assessing the physical violations of such a mechanism in the first instance, and finding that, given certain biological parameters, there are none. “The point being that if nature uses something like the quantized vibrations of molecules to ‘measure’ a smell then the idea is not—mathematically, physically and biologically—as eccentric as it at first seems,” she said. Moreover, there is the possibility that quantum mechanics could play a much broader role in biology than simply underpinning the sense of smell. “Odorants are not the only small molecules that interact unpredictably with large proteins; steroid hormones, anaesthetics and neurotransmitters, to name a few, are examples of ligands that interact specifically with special receptors to produce important biological processes,” Brookes wrote in a recent essay (Brookes, 2010).

The question of whether quantum mechanics really plays a non-trivial role in biology is still hotly debated by physicists and biologists alike.
“[A] non-trivial quantum effect in biology is one that would convince a biologist that they needed to take an advanced quantum mechanics course and learn about Hilbert space and operators etc., so that they could understand the effect,” argued theoretical quantum physicists Howard Wiseman and Jens Eisert in their contribution to the book Quantum Aspects of Life (Wiseman & Eisert, 2008). In their rational challenge to the general enthusiasm for a quantum revolution in biology, Wiseman and Eisert point out that a number of “exotic” and “implausible” quantum effects—including a quantum life principle, quantum computing in the brain, quantum computing in genetics and quantum consciousness—have been suggested, and they warn researchers to be cautious of “ideas that are more appealing at first sight than they are realistic” (Wiseman & Eisert, 2008).

Keeping this warning in mind, the view of life from a quantum perspective can still provide a deeper insight into the mechanisms that allow living organisms to thrive without succumbing to the increasing entropy of their environment. But does quantum biology have practical applications? “The investigation of the role of quantum physics in biology is fascinating because it could help explain why evolution has favoured some biological designs, as well as inspire us to develop more efficient artificial devices,” Omar said. The most often quoted examples of such devices are solar collectors that would use efficient energy transport mechanisms inspired by the quantum proficiency of natural light-harvesting systems, and quantum computing. But there is much more ahead. In 2010, the Pentagon’s cutting-edge research branch, DARPA (Defense Advanced Research Projects Agency, USA), launched a solicitation for innovative proposals in the area of quantum effects in a biological environment.
“Proposed research should establish beyond any doubt that manifestly quantum effects occur in biology, and demonstrate through simulation proof-of-concept experiments that devices that exploit these effects could be developed into biomimetic sensors,” states the synopsis (DARPA, 2010). This programme will thus look explicitly at photosynthesis, magnetic-field sensing and odour detection to lay the foundations for novel sensor technologies for military applications.

Clearly a number of civil needs could also be fulfilled by quantum-based biosensors. Take, for example, the much sought-after ‘electronic nose’ that could replace the use of dogs to find drugs or explosives, or could assess food quality and safety. Such a device could even be used to detect cancer, as suggested by a recent publication from a Swedish team of researchers who reported that ovarian carcinomas emit a different array of volatile signals to normal tissue (Horvath et al, 2010). “Our goal is to be able to screen blood samples from apparently healthy women and so detect ovarian cancer at an early stage when it can still be cured,” said the study’s leading author György Horvath in a press release (University of Gothenburg, 2010).

Despite its already long incubation time, quantum biology is still in its infancy, but with an intriguing adolescence ahead. “A new wave of scientists are finding that quantum physics has the appropriate language and methods to solve many problems in biology, observing phenomena from a different point of view and developing new concepts. The next important steps are experimental verification/falsification,” Brookes said. “One could easily expect many more new exciting ideas and discoveries to emerge from the intersection of two major areas such as quantum physics and biology,” Omar concluded.

17.
Morris SC 《EMBO reports》2012,13(4):281-281
Transmembrane proteins with seven helices, whether they are in the insect ‘nose’ or the mammalian eye, are the molecules of choice for detecting the world. No matter the kingdom, evolution seems to settle on the optimal solution time and time again.

How best to describe evolution? A drunkard’s walk; a shambling billion-year spree punctuated with prat-falls, accompanied by a Beckettian mumbling? Or a sleek greyhound rippling with suppressed energy, racing along the narrow highways of the Darwinian landscape? “Mumble and shuffle” would be the answer of most biologists, but perhaps next time we open our Darwin we should also turn up The Ride of the Valkyries.

When reviewing the evolution of eyes, Russell Fernald hit the nail on the head when he remarked how the opsins have “proven irresistible for use in eyes” [1]. Indeed they have; not only do they belong to the vast family of G-protein-coupled receptors (GPCRs), but it is no accident that, in ears and noses, related transmembrane proteins with the canonical seven helices are also poised to transduce noise and smells into electrical signals and, ultimately, awareness.

There is a comforting congruity in all this. Just as our eyes register the world through the opsins, in the compound eyes of insects the same proteins wait in attendance. But let us turn to the insect ‘nose’. Here, despite a radically different anatomy replete with antennal and maxillary sensilla, the arrangement turns out to be strikingly convergent in terms of operation with the mammalian schnozzle [2], but when we look at the molecular machinery something curious seems to be going on. One component, concentrated in the coeloconic sensilla, is tasked with detecting molecules such as alcohol and ammonia. Here, the machinery depends on the ionotropic glutamate receptors. This appears to be a classic case of co-option because not only are these receptors ancient [3], they also show fascinating links to synaptic receptors [4].
However, the bulk of the olfactory capacity looks to a series of transmembrane proteins. At first glance, complete with their seven helices spanning the sensory membrane, they look reassuringly like the ever-reliable GPCRs. Except they aren’t! Blink twice and then notice that these proteins are back to front, so that the amino-terminus is cytoplasmic and the carboxy-terminus extracellular. This is completely opposite to the GPCRs [5], but surely it represents a trivial difference? On the contrary. Lurking in the insect ‘nose’ is a ligand-gated cation channel that at first sight looks practically identical to a GPCR but is completely unrelated [6].

Maybe I am a bear of little brain, but is this not all a little peculiar? Why throw away a perfectly acceptable GPCR—which, after all, other ecdysozoans such as nematodes use—and install what is effectively a near-perfect mimic? A little trick to keep us on our Darwinian toes? Maybe a clue comes from the choanoflagellates. Central to their life is nitrogen metabolism, but rather oddly the genes they employ have been recruited from algae. “If it ain’t broke, don’t fix it”, except that Aurora Nedelcu and colleagues [7] suggest these imports turned out to be a notch better than the incumbent machinery. Spitfire versus Messerschmitt, if you like; both superb aircraft, but the former had the edge.

Perhaps a parallel argument applies to the insects. Their ‘noses’ might be functionally equivalent to those of mammals, but insects live in a different world, zooming through the air at high speed and encountering smells in the form of narrow odour plumes separated by ‘clear’ air. Rather different from the leisurely inhalations of a large mammal; on the insect scale of things, time is of the essence [8]. This might also explain why there are a variety of transduction cascades, some linked to tried and tested methods but others evidently novel. We should, however, not lose sight of the central point.
Be it in terms of fundamental configurations of olfactory design or the molecular machinery behind it, insects do indeed replay the tape of life, but with end results that are very much the same. With respect to the receptor protein, frankly, who cares whether it is a GPCR or a ligand-gated ion channel protein? They are completely unrelated, but the far more remarkable fact is that, in terms of transduction, the system evidently has no alternative. The molecule must be a seven-helix transmembrane protein; this is the molecule of choice. Evolution meets design: Darwin and Plato embrace.

Irresistibly, evolution will navigate to this solution. Rest assured that on Threga IX—that charming little planet just to the left of Arcturus—eyes will flicker and noses will swivel beneath an alien sun. We can save ourselves all the fuss of an extremely expensive extraterrestrial excursion. In those alien eyes and noses, we can be quite certain that a seven-helix transmembrane protein will be busy telling its owner that the sunset is red and dinner is almost ready. Gin and tonic, anybody?

18.
L Bornmann 《EMBO reports》2012,13(8):673-676
The global financial crisis has changed how nations and agencies prioritize research investment. There has been a push towards science with expected benefits for society, yet devising reliable tools to predict and measure the social impact of research remains a major challenge.

Even before the Second World War, governments had begun to invest public funds into scientific research with the expectation that military, economic, medical and other benefits would ensue. This trend continued during the war and throughout the Cold War period, with increasing levels of public money being invested in science. Nuclear physics was the main beneficiary, but other fields were also supported as their military or commercial potential became apparent. Moreover, research came to be seen as a valuable enterprise in and of itself, given the value of the knowledge generated, even if advances in understanding could not be applied immediately. Vannevar Bush, science advisor to President Franklin D. Roosevelt during the Second World War, established the inherent value of basic research in his report to the President, Science, the Endless Frontier, and it has become the underlying rationale for public support and funding of science.

However, the growth of scientific research during the past decades has outpaced the public resources available to fund it. This has led to a problem for funding agencies and politicians: how can limited resources be most efficiently and effectively distributed among researchers and research projects? This challenge—to identify promising research—spawned the development both of measures to assess the quality of scientific research itself and of measures to determine the societal impact of research. Although the first set of measures has been relatively successful and is widely used to determine the quality of journals, research projects and research groups, it has been much harder to develop reliable and meaningful measures to assess the societal impact of research.
The impact of applied research, such as drug development, IT or engineering, is obvious, but the benefits of basic research are less so: they are harder to assess and have been under increasing scrutiny since the 1990s [1]. In fact, there is no direct link between the scientific quality of a research project and its societal value. As Paul Nightingale and Alister Scott of the University of Sussex’s Science and Technology Policy Research centre have pointed out: “research that is highly cited or published in top journals may be good for the academic discipline but not for society” [2]. Moreover, it might take years, or even decades, until a particular body of knowledge yields new products or services that affect society. By way of example, in an editorial on the topic in the British Medical Journal, editor Richard Smith cites the original research into apoptosis as work that is of high quality, but that has had “no measurable impact on health” [3]. He contrasts this with, for example, research into “the cost effectiveness of different incontinence pads”, which is certainly not seen as high value by the scientific community, but which has had an immediate and important societal impact.

The problem actually begins with defining the ‘societal impact of research’. A series of different concepts has been introduced: ‘third-stream activities’ [4], ‘societal benefits’ or ‘societal quality’ [5], ‘usefulness’ [6], ‘public values’ [7], ‘knowledge transfer’ [8] and ‘societal relevance’ [9, 10].
Yet, each of these concepts is ultimately concerned with measuring the social, cultural, environmental and economic returns from publicly funded research, be they products or ideas.

In this context, ‘societal benefits’ refers to the contribution of research to the social capital of a nation, in stimulating new approaches to social issues, or in informing public debate and policy-making. ‘Cultural benefits’ are those that add to the cultural capital of a nation, for example, by giving insight into how we relate to other societies and cultures, by providing a better understanding of our history and by contributing to cultural preservation and enrichment. ‘Environmental benefits’ add to the natural capital of a nation, by reducing waste and pollution, and by increasing natural preserves or biodiversity. Finally, ‘economic benefits’ increase the economic capital of a nation by enhancing its skills base and by improving its productivity [11].

Given the variability and the complexity of evaluating the societal impact of research, Barend van der Meulen at the Rathenau Institute for research and debate on science and technology in the Netherlands, and Arie Rip at the School of Management and Governance of the University of Twente, the Netherlands, have noted that “it is not clear how to evaluate societal quality, especially for basic and strategic research” [5]. There is no accepted framework with adequate datasets comparable to, for example, Thomson Reuters’ Web of Science, which enables the calculation of bibliometric values such as the h index [12] or the journal impact factor [13]. There are also no criteria or methods that can be applied to the evaluation of societal impact, whilst conventional research and development (R&D) indicators have given little insight, with the exception of patent data. In fact, in many studies, the societal impact of research has been postulated rather than demonstrated [14].
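The h index mentioned above has, unlike societal impact, a crisp operational definition: it is the largest h such that h of an author's papers have each been cited at least h times. A minimal sketch in Python (the function name and the sample citation counts are illustrative, not drawn from any dataset in the article):

```python
def h_index(citations):
    """Return the h index: the largest h such that at least
    h papers have h or more citations each."""
    ranked = sorted(citations, reverse=True)  # most-cited first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:   # this paper still supports a larger h
            h = rank
        else:               # ranks only get larger, citations smaller
            break
    return h

# Five papers cited 10, 8, 5, 4 and 3 times: four papers have
# at least four citations, but not five with at least five.
print(h_index([10, 8, 5, 4, 3]))  # 4
```

The contrast is the point made in the text: a one-line definition over a clean citation database is easy to compute and audit, whereas no comparable dataset or formula exists for societal returns.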
For Benoît Godin at the Institut National de la Recherche Scientifique (INRS) in Quebec, Canada, and co-author Christian Doré, “systematic measurements and indicators [of the] impact on the social, cultural, political, and organizational dimensions are almost totally absent from the literature” [15]. Furthermore, they note, most research in this field is primarily concerned with economic impact.

A presentation by Ben Martin from the Science and Technology Policy Research Unit at Sussex University, UK, cites four common problems that arise in the context of societal impact measurements [16]. The first is the causality problem—it is not clear which impact can be attributed to which cause. The second is the attribution problem, which arises because impact can be diffuse, complex and contingent, and it is not clear what should be attributed to research and what to other inputs. The third is the internationality problem, which arises as a result of the international nature of R&D and innovation and makes attribution virtually impossible. Finally, the timescale problem arises because the premature measurement of impact might result in policies that emphasize research yielding only short-term benefits, ignoring potential long-term impact.

In addition, there are four other problems. First, it is hard to find experts to assess societal impact in evaluations based on peer review. As Robert Frodeman and James Britt Holbrook at the University of North Texas, USA, have noted, “[s]cientists generally dislike impacts considerations” and evaluating research in terms of its societal impact “takes scientists beyond the bounds of their disciplinary expertise” [10]. Second, given that the scientific work of an engineer has a different impact than the work of a sociologist or historian, it will hardly be possible to have a single assessment mechanism [4, 17].
Third, societal impact measurement should take into account that there is not just one model of a successful research institution. As such, assessment should be adapted to the institution’s specific strengths in teaching and research, the cultural context in which it exists and national standards. Finally, the societal impact of research is not always going to be desirable or positive. For example, Les Rymer, graduate education policy advisor to the Australian Group of Eight (Go8) network of university vice-chancellors, noted in a report for the Go8 that “environmental research that leads to the closure of a fishery might have an immediate negative economic impact, even though in the much longer term it will preserve a resource that might again become available for use. The fishing industry and conservationists might have very different views as to the nature of the initial impact—some of which may depend on their view about the excellence of the research and its disinterested nature” [18].

Unlike scientific impact measurement, for which there are numerous established methods that are continually refined, research into societal impact is still in the early stages: there is no distinct community with its own series of conferences, journals or awards for special accomplishments. Even so, governments already conduct budget-relevant measurements, or plan to do so. The best-known national evaluation system is the UK Research Assessment Exercise (RAE), which has evaluated research in the UK since the 1980s. Efforts are under way to set up the Research Excellence Framework (REF), which is set to replace the RAE in 2014 “to support the desire of modern research policy for promoting problem-solving research” [21]. In order to develop the new arrangements for the assessment and funding of research in the REF, the Higher Education Funding Council for England (HEFCE) commissioned RAND Europe to review approaches for evaluating the impact of research [20].
The recommendation from this consultation is that impact should be measured in a quantifiable way, and that expert panels should review narrative evidence in case studies supported by appropriate indicators [19, 21].

Many of the studies that have carried out societal impact measurement chose to do so on the basis of case studies. Although this method is labour-intensive and a craft rather than a quantitative activity, it seems to be the best way of measuring the complex phenomenon that is societal impact. The HEFCE stipulates that “case studies may include any social, economic or cultural impact or benefit beyond academia that has taken place during the assessment period, and was underpinned by excellent research produced by the submitting institution within a given timeframe” [22]. Claire Donovan at Brunel University, London, UK, considers the preference for a case-study approach in the REF to be “the ‘state of the art’ [for providing] the necessary evidence-base for increased financial support of university research across all fields” [23]. According to Finn Hansson from the Department of Leadership, Policy and Philosophy at the Copenhagen Business School, Denmark, and co-author Erik Ernø-Kjølhede, the new REF is “a clear political signal that the traditional model for assessing research quality based on a discipline-oriented Mode 1 perception of research, first and foremost in the form of publication in international journals, was no longer considered sufficient by the policy-makers” [19].
‘Mode 1’ describes research governed by the academic interests of a specific community, whereas ‘Mode 2’ is characterized by collaboration—both within the scientific realm and with other stakeholders—transdisciplinarity and basic research that is being conducted in the context of application [19].

The new REF will also entail changes in budget allocations: the societal-impact dimension will determine 20% of the overall evaluation of a research unit for the purpose of allocations [19]. The final REF guidance contains lists of examples of different types of societal impact [24].

Societal impact is much harder to measure than scientific impact, and there are probably no indicators that can be used across all disciplines and institutions for collation in databases [17]. Societal impact often takes many years to become apparent, and “[t]he routes through which research can influence individual behaviour or inform social policy are often very diffuse” [18].

Yet, the practitioners of societal impact measurement should not conduct this exercise alone; scientists should also take part. According to Steve Hanney at Brunel University, an expert in assessing payback or impacts from health research, and his co-authors, many scientists see societal impact measurement as a threat to their scientific freedom and often reject it [25]. If the allocation of funds is increasingly oriented towards societal impact issues, it challenges the long-standing reward system in science, whereby scientists receive credits—not only citations and prizes but also funds—for their contributions to scientific advancement. However, given that societal impact measurement is already important for various national evaluations—and other countries will probably follow—scientists should become more concerned with this aspect of their research. In fact, scientists are often unaware that their research has a societal impact.
“The case study at BRASS [Centre for Business Relationships, Accountability, Sustainability and Society] uncovered activities that were previously ‘under the radar’, that is, researchers have been involved in activities they realised now can be characterized as productive interactions” [26] between them and societal stakeholders. It is probable that research in many fields already has a direct societal impact, or induces productive interactions, but that it is not yet perceived as such by the scientists conducting the work.

The involvement of scientists is also necessary in the development of mechanisms to collect accurate and comparable data [27]. Researchers in a particular discipline will be able to identify appropriate indicators to measure the impact of their kind of work. If the approach to establishing measurements is not sufficiently broad in scope, there is a danger that readily available indicators will be used for evaluations, even if they do not adequately measure societal impact [16]. There is also a risk that scientists might base their research projects and grant applications on readily available and ultimately misleading indicators. As Hansson and Ernø-Kjølhede point out, “the obvious danger is that researchers and universities intensify their efforts to participate in activities that can be directly documented rather than activities that are harder to document but in reality may be more useful to society” [19]. Numerous studies have documented that scientists already base their activities on the criteria and indicators that are applied in evaluations [19, 28, 29].

Until reliable and robust methods to assess impact are developed, it makes sense to use expert panels to qualitatively assess the societal relevance of research in the first instance.
Rymer has noted that “just as peer review can be useful in assessing the quality of academic work in an academic context, expert panels with relevant experience in different areas of potential impact can be useful in assessing the difference that research has made” [18].

Whether scientists like it or not, the societal impact of their research is an increasingly important factor in attracting public funding and support for basic research. This has always been the case, but new research into measures that can assess the societal impact of research would provide better qualitative and quantitative data on which funding agencies and politicians could base decisions. At the same time, such measurement should not come at the expense of basic, blue-sky research, given that it is and will remain near-impossible to predict the impact of certain research projects years or decades down the line.

19.
Samuel Caddick 《EMBO reports》2008,9(12):1174-1176

20.
How easy is it to acquire an organelle? How easy is it to lose one? Michael Gray considers the latest evidence in this regard concerning the chromalveolates.

How easy is it to acquire an organelle? How easy is it to lose one? These questions underpin the current debate about the evolution of the plastid—that is, the chloroplast—the organelle of photosynthesis in eukaryotic cells.

The origin of the plastid has been traced to an endosymbiosis between a eukaryotic host cell and a cyanobacterial symbiont, the latter gradually ceding genetic control to the former through endosymbiotic gene transfer (EGT). The resulting organelle now relies for its biogenesis and function on the expression of a small set of genes retained in the shrunken plastid genome, as well as a much larger set of transferred nuclear genes encoding proteins synthesized in the cytosol and imported into the organelle.

This scenario accounts for the so-called primary plastids in green algae and their land plant relatives, in red algae and in glaucophytes, which together comprise Plantae (or Archaeplastida)—one of five or six recognized eukaryotic supergroups (Adl et al, 2005). In other algal types, plastids are ‘second-hand’—they have been acquired not by taking up a cyanobacterium, but by taking up a primary-plastid-containing eukaryote (sometimes a green alga, sometimes a red alga) to produce secondary plastids. In most of these cases, all that remains of the eukaryotic symbiont is its plastid; the genes coding for plastid proteins have moved from the endosymbiont to the host nucleus. A eukaryotic host—which may or may not itself have a plastid—might also take up a secondary-plastid symbiont (generating tertiary plastids), or a secondary-plastid host might take up a primary-plastid symbiont. You get the picture: plastid evolution is complicated!

Several excellent recent reviews present expanded accounts of plastid evolution (Reyes-Prieto et al, 2007; Gould et al, 2008; Archibald, 2009; Keeling, 2009).
Here, I focus on one particular aspect of plastid evolutionary theory: the ‘chromalveolate hypothesis’, proposed in 1999 by Tom Cavalier-Smith (1999).

The chromalveolate hypothesis seeks to explain the origin of chlorophyll c-containing plastids in several eukaryotic groups, notably cryptophytes, alveolates (ciliates, dinoflagellates and apicomplexans), stramenopiles (heterokonts) and haptophytes—together dubbed the ‘chromalveolates’. The plastid-containing members of this assemblage are mainly eukaryotic algae with secondary plastids that were acquired through endosymbiosis with a red alga. The question is: how many times did such an endosymbiosis occur within the chromalveolate grouping?

A basic tenet of the chromalveolate hypothesis is that the evolutionary conversion of an endosymbiont to an organelle should be an exceedingly rare event, and a hard task for a biological system to accomplish, because the organism has to ‘learn’ how to target a large number of nucleus-encoded proteins—the genes of many of which were acquired by EGT—back into the organelle. Our current understanding of this targeting process is detailed in the reviews cited earlier. Suffice it to say that the evolutionary requirements appear numerous and complex—sufficiently so that the chromalveolate hypothesis posits that secondary endosymbiosis involving a red alga happened only once, in a common ancestor of the various groups comprising the chromalveolates.

Considerable molecular and phylogenetic data have been marshalled over the past decade in support of the chromalveolate hypothesis; however, no single data set specifically unites all chromalveolates, even though there is compelling evidence for various subgroup relationships (Keeling, 2009). Moreover, within the proposed chromalveolate assemblage, plastid-containing lineages are interspersed with plastid-lacking ones—for example, ciliates in the alveolates, and oomycetes such as Phytophthora in the stramenopiles.
The chromalveolate hypothesis rationalizes such interspersion by assuming that the plastid was lost at some point during the evolution of the aplastidic lineages. The discovery in such aplastidic lineages of genes of putatively red algal origin, and in some cases suggestive evidence of a non-photosynthetic plastid remnant, would seem to be consistent with this thesis, although these instances are still few and far between.

In this context, two recent papers are notable in that the authors seek to falsify, through rigorous testing, several explicit predictions of the chromalveolate hypothesis—and in both cases they succeed in doing so. Because molecular phylogenies have failed either to robustly support or to robustly disprove the chromalveolate hypothesis, Baurain et al (2010) devised a phylogenomic falsification of the chromalveolate hypothesis that does not depend on full resolution of the eukaryotic tree. They argued that if the chlorophyll c-containing chromalveolate lineages all derive from a single red algal ancestor, then similar amounts of sequence from the three compartments should allow them to recover chromalveolate monophyly in all cases. The statistical support levels in their analysis refuted this prediction, leading them to “reject the chromalveolate hypothesis as falsified in favour of more complex evolutionary scenarios involving multiple higher order eukaryote–eukaryote endosymbioses”.

In another study, Stiller et al (2009) applied statistical tests to several a priori assumptions relating to the finding of genes of supposed algal origin in the aplastidic chromalveolate taxon Phytophthora. These authors determined that the signal from these genes “is inconsistent with the chromalveolate hypothesis, and better explained by alternative models of sequence and genome evolution”.

So, is the chromalveolate hypothesis dead? These new studies are certainly the most serious challenge yet.
Additional data, including genome sequences of poorly characterized chromalveolate lineages, will no doubt augment comparative phylogenomic studies aimed at evaluating the chromalveolate hypothesis—which these days is looking decidedly shaky.
