1.
Sirtuins are a promising avenue for orally administered drugs that might deliver the anti-ageing benefits normally provided by calorie restriction.

Calorie (or dietary) restriction was first shown to extend rodent lifespan almost 80 years ago, and it remains the most robust longevity-promoting intervention in mammals, genetic or dietary. Sirtuins are NAD-dependent deacylases homologous to yeast Sir2p and were first shown to extend replicative lifespan in budding yeast [1]. Because of their NAD requirement, sirtuins were proposed as mediators of the anti-ageing effects of calorie restriction [1]. Indeed, many studies in yeast, Caenorhabditis elegans, Drosophila melanogaster and mice have supported these ideas [2]. However, a 2011 paper posed a challenge: transgenic strains of C. elegans and Drosophila that overexpress SIR2 were found not to be long-lived [3].

Rather than review the extensive sirtuin literature prior to that paper, I focus on a few key studies that have followed it, which underscore a conserved role of sirtuins in slowing ageing. In the first study, two highly divergent budding yeast strains—a lab strain and a clinical isolate—were crossed, and a genome-wide quantitative trait locus analysis was performed to map genes that determine differences in replicative lifespan [4]. The top hit was SIR2, which explained more than half of the difference in replicative lifespan between the two strains, owing to five codon differences between the SIR2 alleles. In Drosophila, overexpression of dSIR2 in the fat body extended the lifespan of flies on a normal diet, whereas deletion of dSIR2 in the fat body abolished the lifespan extension conferred by a calorie-restriction-like protocol [5]. This example illustrates the key role of dSIR2 in lifespan determination and its central role in mediating dietary effects on longevity, discussed further below.

Another study showed that two transgenic mouse lines overexpressing mammalian SIRT6—mammals have seven sirtuins—had significantly extended lifespans [6]. Finally, a recent study clearly showed that worm sir-2.1 can extend lifespan by regulating two distinct longevity pathways involving insulin-like signalling and the mitochondrial unfolded protein response [7]. All told, this body of work supports the original proposal that sirtuins are conserved mediators of longevity.

Many other studies also illustrate that sirtuins can mediate the effects of diet. As an example, calorie restriction completely protected against ageing-induced hearing loss in wild-type but not SIRT3−/− mice [8]. The mitochondrial sirtuin SIRT3 thus helps to protect the neurons of the inner ear against oxidative damage during calorie restriction. Of course, these studies do not imply that sirtuins are the only mediators of calorie restriction effects, but they do indicate that they must be central components.

Finally, what about the translational potential of this research, namely the use of putative SIRT1-activating compounds—resveratrol and newer, synthetic STACs? Two new studies provide strong evidence that the effects of these compounds really do occur through SIRT1. First, acute deletion of SIRT1 in adult mice prevented many of the physiological effects of resveratrol and other STACs [9]. Second, a single mutation adjacent to the SIRT1 catalytic domain abolished the ability of STACs to activate the enzyme in vitro or to promote the canonical physiological changes in vivo [10].

In summary, sirtuins seem to represent a promising avenue by which orally available drugs might deliver the anti-ageing benefits normally triggered by calorie restriction. Indeed, the biology of sirtuins is complex and diverse, but this is an indication of their deep reach into key disease processes. Connections between sirtuins and cancer metabolism are but one new example. The future path of discovery promises to be exciting and might lead to new drugs that maintain robust health.

2.
The differentiation of pluripotent stem cells into various progeny is perplexing. In vivo, nature imposes strict fate constraints. In vitro, PSCs differentiate into almost any phenotype. Might the concept of 'cellular promiscuity' explain these surprising behaviours?

John Gurdon's [1] and Shinya Yamanaka's [2] Nobel Prize involves discoveries that vex fundamental concepts about the stability of cellular identity [3,4], ageing as a rectified path, and the differences between germ cells and somatic cells. The differentiation of pluripotent stem cells (PSCs) into progeny, including spermatids [5] and oocytes [6], is perplexing. In vivo, nature imposes strict fate constraints. Yet in vitro, reprogrammed PSCs liberated from the body's governance freely differentiate into any phenotype—except placenta—violating even the segregation of somatic cells from germ cells. Although the term is anthropomorphic, might the concept of 'cellular promiscuity' explain these surprising behaviours?

Fidelity to one's differentiated state is nearly universal in vivo—even cancers retain some allegiance. Appreciating the mechanisms that, in vitro, liberate reprogrammed cells from the numerous constraints governing development in vivo might provide new insights. Like highway guiderails, a range of constraints preclude progeny cells within embryos and organisms from travelling too far from the trajectory set by their ancestors. Restrictions are imposed externally (basement membranes and intercellular adhesions), internally (chromatin, cytoskeleton, endomembranes and mitochondria) and temporally, by ageing.

'Cellular promiscuity' was glimpsed previously during cloning, when somatic cells successfully 'fertilized' enucleated oocytes in amphibians [1] and later with 'Dolly' [7]. Embryonic stem cells (ESCs) corroborate this. The inner cell mass of the blastocyst develops faithfully, but liberation from the trophectoderm generates pluripotent ESCs in vitro, freed from fate and polarity restrictions. These freedom-seeking ESCs still abide by three-dimensional rules, as they conform to chimaera body patterning when injected into blastocysts. Yet transplantation elsewhere results in chaotic teratomas or helter-skelter in vitro differentiation—that is, pluripotency.

August Weismann's germ plasm theory recognized 130 years ago that gametes produce somatic cells, never the reverse. Primordial germ cell migrations into fetal gonads, and parent-of-origin imprints, explain how germ cells are sequestered, retaining genomic and epigenomic purity. Left uncontaminated, these future gametes are held in pristine form to parent the next generation. However, the cracks separating germ and somatic lineages in vitro are widening [5,6]. Perhaps they are restrained within gonads not for their purity but to prevent the wild, uncontrolled misbehaviour that results in germ cell tumours.

The 'cellular promiscuity' concept might explain why cells of nearly any desired lineage can be detected among PSCs in vitro using monospecific markers. Are assays so sensitive that rare cells can be detected in heterogeneous cultures? Certainly, population heterogeneity matters for transplantable cells—dopaminergic neurons and islet cells—compared with applications needing only a few cells, such as sperm and oocytes. This dilemma of maintaining cellular identity in vitro after reprogramming is significant. If it is not addressed, the value of unrestrained induced PSCs (iPSCs) as reliable models for 'diseases in a dish', let alone for subsequent therapeutic transplantations, might be diminished. X-chromosome re-inactivation variants in differentiating human PSCs, epigenetic imprint errors and copy number variations are all indicators of in vitro infidelity. PSCs, held to be undifferentiated cells, are artefacts after all, as they would otherwise undergo their programmed development in vivo.

If correct, the hypothesis accounts for concerns raised about the inherent genomic and epigenomic unreliability of iPSCs; they are likely to be unfaithful to their in vivo differentiation trajectories owing both to freedom from in vivo developmental programmes and to poorly characterized modifications in culture conditions. 'Memory' of the PSC's identity in vivo might need to be preserved by approaches that do not fully erase imprints. Regulatory authorities, including the Food & Drug Administration, require evidence that cultured PSCs retain their original cellular identity. Notwithstanding fidelity lapses at the organismal level, the recognition that our cells have intrinsic freedom-loving tendencies in vitro might generate better approaches that release somatic cells only partly, into probation rather than full emancipation.

3.
Antony M Dean. EMBO Reports (2010) 11(6): 409.
Antony Dean explores the past, present and future of evolutionary theory and our continuing efforts to explain biological patterns in terms of molecular processes and mechanisms.

There are just two questions to be asked in evolution: how are things related, and what makes them differ? Lamarck was the first biologist—he invented the word—to address both. In his Philosophie Zoologique (1809) he suggested that the relationships among species are better described by branching trees than by a simple ladder, that new species arise gradually by descent with modification, and that they adapt to changing environments through the inheritance of acquired characteristics. Much that Lamarck imagined has since been superseded. Following Wallace and Darwin, we now envision that species belong to a single highly branched tree and that natural selection is the mechanism of adaptation. Nonetheless, to Lamarck we owe the insight that pattern is produced by process and that both need mechanistic explanation.

Questions of pattern, process and mechanism pervade the modern discipline of molecular evolution. The field was established when Zuckerkandl & Pauling (1965) noted that haemoglobins evolve at a roughly constant rate. Their "molecular evolutionary clock" forever changed our view of evolutionary history. Not only were seemingly intractable relationships resolved—for example, whales are allies of the hippopotamus—but the eubacterial origins of eukaryotic organelles were also firmly established and a new domain of life was discovered: the Archaea.

Yet different genes sometimes produce different trees. Golding & Gupta (1995) resolved two dozen conflicting protein trees by suggesting that Eukarya arose following massive horizontal gene transfer between Bacteria and Archaea. Whole-genome sequencing has since revealed so many conflicts that horizontal gene transfer seems characteristic of prokaryote evolution. In higher animals—where horizontal transfer is sufficiently rare that the tree metaphor remains robust—rapid and inexpensive whole-genome sequencing promises a wealth of data for population studies. The patterns of migration, admixture and divergence of species will soon be addressed in unprecedented detail.

Sequence analyses are also used to infer processes. A constant molecular clock originally buttressed the neutral theory of molecular evolution (Kimura, 1985). The clock has since proven erratic, while the neutral theory now serves as a null hypothesis for statistical tests of 'selection'. In truth, most tests are also sensitive to demographic changes. The promise of ultra-high-throughput sequencing to provide genome-wide data should help dissect selection, which targets particular genes, from demography, which affects all the genes in a genome, although weak selection and ancient adaptations will remain undetected.

In the functional synthesis (Dean & Thornton, 2007), molecular biology provides the experimental means to test evolutionary inferences decisively. For example, site-directed mutagenesis can be used to introduce putatively selected mutations into reconstructed ancestral sequences; the gene products are then expressed and purified and their functional properties determined in vitro. In microbial species, homologous recombination is used routinely to replace wild-type genes with engineered ones, enabling organismal phenotypes and fitnesses to be determined in vivo. The vision of Zuckerkandl & Pauling (1965) that by "furnishing probable structures of ancestral proteins, chemical paleogenetics will in the future lead to deductions concerning molecular functions as they were presumably carried out in the distant evolutionary past" is now a reality.

If experimental tests of evolutionary inferences open windows on past mechanisms, directed evolution focuses on the mechanisms themselves without attempting historical reconstruction. Today's 'fast-forward' molecular breeding experiments use mutagenic PCR to generate vast libraries of variation and high-throughput screens to identify rare novel mutants (Romero & Arnold, 2009; Khersonsky & Tawfik, 2010). Among the many topics explored are the role of intragenic recombination in furthering adaptation, the number and location of mutations in protein structures, the necessity—or lack thereof—of broadening substrate specificity before a new function is acquired, the evolution of robustness, and the alleged trade-off between stability and catalytic efficiency. Few, however, have approached the detail of the classic studies of evolved β-galactosidase (Hall, 2003), which revealed how the free-energy profile of an enzyme-catalysed reaction evolved. Even further removed from natural systems are catalytic RNAs that, by combining phenotype and genotype within the same molecule, allow evolution to proceed in a lifeless series of chemical reactions. Recently, two RNA enzymes that catalyse each other's synthesis were shown to undergo self-sustained exponential amplification (Lincoln & Joyce, 2009). Competition for limiting tetranucleotide resources favours mutants with higher relative fitness—faster replication—demonstrating that adaptive evolution can occur in a chemically defined abiotic genetic system.

Lamarck was the first to attempt a coherent explanation of biological patterns in terms of processes and mechanisms. That his legacy can still be discerned in the vibrant field of molecular evolution would no doubt please him as much as it pleases us, promising extraordinary advances in our understanding of the mechanistic basis of molecular adaptation.

4.
The public view of life-extension technologies is more nuanced than expected, and researchers must engage in discussions if they hope to promote awareness and acceptance.

There is increasing research and commercial interest in the development of novel interventions that might extend human life expectancy by decelerating the ageing process. In this context, there is unabated interest in the life-extending effects of caloric restriction in mammals, and there are great hopes for drugs that could slow human ageing by mimicking its effects (Fontana et al, 2010). The multinational pharmaceutical company GlaxoSmithKline, for example, acquired Sirtris Pharmaceuticals in 2008, ostensibly for their portfolio of drugs targeting 'diseases of ageing'. More recently, the immunosuppressant drug rapamycin has been shown to extend maximum lifespan in mice (Harrison et al, 2009). Such findings have stoked the kind of enthusiasm that has become common in media reports of life-extension and anti-ageing research, with claims that rapamycin might be "the cure for all that ails" (Hasty, 2009), or that it is an "anti-aging drug [that] could be used today" (Blagosklonny, 2007).

Given the academic, commercial and media interest in prolonging human lifespan—a centuries-old dream of humanity—it is interesting to gauge what the public thinks about the possibility of living longer, healthier lives, and to ask whether people would be willing to buy and use drugs that slow the ageing process. Surveys that have addressed these questions have given some rather surprising results, contrary to the expectations of many researchers in the field. They have also highlighted that although human life extension (HLE) and ageing are topics with enormous implications for society and individuals, scientists have not communicated efficiently with the public about their research and its possible applications.

Proponents and opponents of HLE often assume that public attitudes towards ageing interventions will be strongly for or against, but until now there has been little empirical evidence with which to test these assumptions (Lucke & Hall, 2005). We recently surveyed members of the public in Australia and found a variety of opinions, including some ambivalence towards the development and use of drugs that could slow ageing and increase lifespan. Our findings suggest that many members of the public anticipate both positive and negative outcomes from this work (Partridge 2009a, b, 2010; Underwood et al, 2009).

In a community survey of public attitudes towards HLE, we found that around two-thirds of a sample of 605 Australian adults supported research with the potential to increase the maximum human lifespan by slowing ageing (Partridge et al, 2010). However, only one-third expressed an interest in using an anti-ageing pill if it were developed. Half of the respondents were not interested in personally using such a pill, and around one in ten were undecided.

Some proponents of HLE anticipate their research being impeded by strong public antipathy (Miller, 2002, 2009). Richard Miller has claimed that opposition to the development of anti-ageing interventions often exists because of an "irrational public predisposition" to think that increased lifespans will only lead to elongation of infirmity. He has called this "gerontologiphobia"—a feeling shared among laypeople that while research to cure age-related diseases such as dementia is laudable, research that aims to intervene in ageing is a "public menace" (Miller, 2002).

We found broad support for the amelioration of age-related diseases and for technologies that might preserve quality of life, but scepticism about a major promise of HLE—that it will delay the onset of age-related diseases and extend an individual's healthy lifespan. Among the people we interviewed, the most commonly cited potential negative personal outcome of HLE was that it would extend the number of years a person spent with chronic illness and poor quality of life (Partridge et al, 2009a). Although some members of the public envisioned more years spent in good health, almost 40% of participants were concerned that a drug to slow ageing would do more harm than good to them personally; another 13% were unsure about the benefits and costs (Partridge et al, 2010).

It would be unwise to label such concerns as irrational when it might be that advocates of HLE have failed to persuade the public on this issue. Have HLE researchers explained what they have discovered about ageing and what it means? Perhaps the public sees the claims that have been made about HLE as 'too good to be true'.

Results of surveys of biogerontologists suggest that they are either unaware or dismissive of public concerns about HLE. They often ignore them, dismiss them as "far-fetched", or feel no responsibility "to respond" (Settersten Jr et al, 2008). Given this attitude, it is perhaps not surprising that the public is sceptical of their claims.

Scientists are not always clear about the outcomes of their work, biogerontologists included. Although the life-extending effects of interventions in animal models are invoked as arguments for supporting anti-ageing research, it is not certain that these interventions will also extend healthy lifespans in humans. Miller (2009) reassuringly claims that the available evidence consistently suggests that quality of life is maintained in laboratory animals with extended lifespans, but he acknowledges that the evidence is "sparse" and urges more research on the topic. In the light of such ambiguity, researchers need to respond to public concerns in ways that reflect the available evidence and the potential of their work, without becoming apostles for technologies that have not yet been developed. An anti-ageing drug that extends lifespan without maintaining quality of life is clearly undesirable, but the public needs to be persuaded that such an outcome can be avoided.

The public is also concerned about the possible adverse side effects of anti-ageing drugs. Many people were bemused to discover that members of the Caloric Restriction Society experienced a loss of libido and loss of muscle mass as a result of adhering to a low-calorie diet to extend their longevity—for many people, such side effects would not be worth the promise of some extra years of life. Adverse side effects are acknowledged as a considerable potential challenge to the development of an effective life-extending drug in humans (Fontana et al, 2010). If researchers do not discuss these possible effects, a curious public might draw its own conclusions.

Some HLE advocates seem eager to tout potential anti-ageing drugs as being free from adverse side effects. For example, Blagosklonny (2007) has argued that rapamycin could be used to prevent age-related diseases in humans because it is "a non-toxic, well tolerated drug that is suitable for everyday oral administration", with its major "side effects" being anti-tumour, bone-protecting and caloric-restriction-mimicking effects. By contrast, Kaeberlein & Kennedy (2009) have advised the public against using the drug because of its immunosuppressive effects.

Aubrey de Grey has called on several occasions for scientists to provide more optimistic timescales for HLE. He claims that public opposition to interventions in ageing is based on "extraordinarily transparently flawed opinions" that HLE would be unethical and unsustainable (de Grey, 2004). In his view, public opposition is driven by scepticism about whether HLE will be possible, and concerns about extending infirmity, injustice or social harms are simply excuses to justify people's belief that ageing is 'not so bad' (de Grey, 2007). He argues that this "pro-ageing trance" can only be broken by persuading the public that HLE technologies are just around the corner.

Contrary to de Grey's expectations of public pessimism, 75% of our survey participants thought that HLE technologies were likely to be developed in the near future. Furthermore, concerns about the personal, social and ethical implications of ageing interventions and HLE were not confined to those who believed that HLE is not feasible (Partridge et al, 2010).

Juengst et al (2003) have rightly pointed out that any interventions that slow ageing and substantially increase human longevity might generate more social, economic, political, legal, ethical and public health issues than any other technological advance in biomedicine. Our survey supports this idea; the major ethical concerns raised by members of the public reflect the many and diverse issues that are discussed in the bioethics literature (Partridge et al, 2009b; Partridge & Hall, 2007).

When pressed, even enthusiasts admit that a drastic extension of human life might be a mixed blessing. A recent review by researchers at the US National Institute on Aging pointed to several economic and social challenges arising from longevity extension (Sierra et al, 2009). Perry (2004) suggests that the ability to slow ageing will cause "profound changes" and a "firestorm of controversy". Even de Grey (2005) concedes that the development of an effective way to slow ageing will cause "mayhem" and "absolute pandemonium". If even the advocates of anti-ageing and HLE anticipate widespread societal disruption, the public is right to express concerns about the prospect of these things becoming reality. It is accordingly unfair to dismiss public concerns about the social and ethical implications as "irrational", "inane" or "breathtakingly stupid" (de Grey, 2004).

The breadth of the possible implications of HLE reinforces the need for more discussion about the funding of such research and the management of its outcomes (Juengst et al, 2003). Biogerontologists need to take public concerns more seriously if they hope to foster support for their work. If there are misperceptions about the likely outcomes of intervention in ageing, then biogerontologists need to better explain their research to the public and discuss how these concerns will be addressed. It is not enough to hope that a breakthrough in human ageing research will automatically assuage public concerns about the effects of HLE on quality of life, overpopulation, economic sustainability, the environment and inequities in access to such technologies. The trajectories of other controversial research areas—such as human embryonic stem cell research and assisted reproductive technologies (Deech & Smajdor, 2007)—have shown that "listening to public concerns on research and responding appropriately" is a more effective way of fostering support than arrogant dismissal of those concerns (Anon, 2009).

Brad Partridge, Jayne Lucke, Wayne Hall

5.
Two articles—one published online in January and in the March issue EMBO reports—implicate autophagy in the control of appetite by regulating neuropeptide production in hypothalamic neurons. Autophagy decline with age in POMC neurons induces obesity and metabolic syndrome.Kaushik et al. EMBO reports, this issue doi:10.1038/embor.2011.260Macroautophagy, which I will call autophagy, is a critical process that degrades bulk cytoplasm, including organelles, protein oligomers and a range of selective substrates. It has been linked with diverse physiological and disease-associated functions, including the removal of certain bacteria, protein oligomers associated with neurodegenerative diseases and dysfunctional mitochondria [1]. However, the primordial role of autophagy—conserved from yeast to mammals—appears to be its ability to provide nutrients to starving cells by releasing building blocks, such as amino acids and free fatty acids, obtained from macromolecular degradation. In yeast, autophagy deficiency enhances death in starvation conditions [2], and in mice it causes death from starvation in the early neonatal period [3,4]. Two recent articles from the Singh group—one of them in this issue of EMBO reports—also implicate autophagy in central appetite regulation [5,6].Autophagy seems to decline with age in the liver [7], and it has thus been assumed that autophagy declines with age in all tissues, but this has not been tested rigorously in organs such as the brain. Conversely, specific autophagy upregulation in Caenorhabditis elegans and Drosophila extends lifespan, and drugs that induce autophagy—but also perturb unrelated processes, such as rapamycin—promote longevity in rodents [8].Autophagy literally means self-eating, and it is therefore interesting to see that this cellular ‘self-eating'' has systemic roles in mammalian appetite control. 
The control of appetite is influenced by central regulators, including various hormones and neurotransmitters, and peripheral regulators, including hormones, glucose and free fatty acids [9]. Autophagy probably has peripheral roles in appetite and energy balance, as it regulates lipolysis and free fatty acid release [10]. Furthermore, Singh and colleagues have recently implicated autophagy in central appetite regulation [5,6].The arcuate nucleus in the hypothalamus has received extensive attention as an integrator and regulator of energy homeostasis and appetite. Through its proximity to the median eminence, which is characterized by an incomplete blood–brain barrier, these neurons rapidly sense metabolic fluctuations in the blood. There are two different neuronal populations in the arcuate nucleus, which appear to have complementary effects on appetite (Fig 1). The proopiomelanocortin (POMC) neurons produce the neuropeptide precursor POMC, which is cleaved to form α-melanocyte stimulating hormone (α-MSH), among several other products. The α-MSH secreted from these neurons activates melanocortin 4 receptors on target neurons in the paraventricular nucleus of the hypothalamus, which ultimately reduce food intake. The second group of neurons contain neuropeptide Y (NPY) and Agouti-related peptide (AgRP). Secreted NPY binds to downstream neuronal receptors and stimulates appetite. AgRP blocks the ability of α-MSH to activate melanocortin 4 receptors [11]. Furthermore, AgRP neurons inhibit POMC neurons [9].Open in a separate windowFigure 1Schematic diagram illustrating the complementary roles of POMC and NPY/AgRP neurons in appetite control. AgRP, Agouti-related peptide; MC4R, melanocortin 4 receptor; α-MSH, α-melanocyte stimulating hormone; NPY, neuropeptide Y; POMC, proopiomelanocortin.The first study from Singh''s group started by showing that starvation induces autophagy in the hypothalamus [5]. This finding alone merits some comment. 
Autophagy is frequently assessed by using phosphatidylethanolamine-conjugated Atg8/LC3 (LC3-II), which is specifically associated with autophagosomes and autolysosomes. LC3-II levels on western blot and the number of LC3-positive vesicles strongly correlate with the number of autophagosomes [1]. To assess whether LC3-II formation is altered by a perturbation, its level can be assessed in the presence of lysosomal inhibitors, which inhibit LC3-II degradation by blocking autophagosome–lysosome fusion [12]. Therefore, differences in LC3-II levels in response to a particular perturbation in the presence of lysosomal inhibitors reflect changes in autophagosome synthesis. An earlier study using GFP-LC3 suggested that autophagy was not upregulated in the brains of starved mice, compared with other tissues where this did occur [13]. However, this study only measured steady state levels of autophagosomes and was performed before the need for lysosomal inhibitors was appreciated. Subsequent work has shown rapid flux of autophagosomes to lysosomes in primary neurons, which might confound analyses without lysosomal inhibitors [14]. Thus, the data of the Singh group—showing that autophagy is upregulated in the brain by a range of methods including lysosomal inhibitors [5]—address an important issue in the field and corroborate another recent study that examined this question by using sophisticated imaging methods [15].“…decreasing autophagy with ageing in POMC neurons could contribute to the metabolic problems associated with age”Singh and colleagues then analysed mice that have a specific knockout of the autophagy gene Atg7 in AgRP neurons [5]. Although fasting increases AgRP mRNA and protein levels in normal mice, these changes were not seen in the knockout mice. AgRP neurons provide inhibitory signals to POMC neurons, and Kaushik and colleagues found that the AgRP-specific Atg7 knockout mice had higher levels of POMC and α-MSH, compared with the normal mice. 
This indicated that starvation regulates appetite in a manner that is partly dependent on autophagy. The authors suggested that the peripheral free fatty acids released during starvation induce autophagy by activating AMP-activated protein kinase (AMPK), a known positive regulator of autophagy. This, in turn, enhances degradation of hypothalamic lipids and increases endogenous intracellular free fatty acid concentrations. The increased intracellular free fatty acids upregulate AgRP mRNA and protein expression. As AgRP normally inhibits POMC/α-MSH production in target neurons, a defect in AgRP responses in the autophagy-null AgRP neurons results in higher α-MSH levels, which could account for the decreased mouse bodyweight.In follow-up work, Singh''s group have now studied the effects of inhibiting autophagy in POMC neurons, again using Atg7 deletion [6]. These mice, in contrast to the AgRP autophagy knockouts, are obese. This might be accounted for, in part, by an increase in POMC preprotein levels and its cleavage product adrenocorticotropic hormone in the knockout POMC neurons, which is associated with a failure to generate α-MSH. Interestingly, these POMC autophagy knockout mice have impaired peripheral lipolysis in response to starvation, which the authors suggest might be due to reduced central sympathetic tone to the periphery from the POMC neurons. In addition, POMC-neuron-specific Atg7 knockout mice have impaired glucose tolerance.This new study raises several interesting issues. How does the autophagy defect in the POMC neurons alter the cleavage pattern of POMC? Is this modulated within the physiological range of autophagy activity fluctuations in response to diet and starvation? Importantly, in vivo, autophagy might fluctuate similarly (or possibly differently) in POMC and AgRP neurons in response to diet and/or starvation. 
Given the tight interrelation of these neurons, how does this affect their overall response in appetite regulation in wild-type animals?

Finally, the study also shows that hypothalamic autophagosome formation is decreased in older mice. To my knowledge, this is the first demonstration of this phenomenon in the brain. The older mice phenocopied aspects of the POMC-neuron autophagy-null mice—increased hypothalamic POMC preprotein and ACTH and decreased α-MSH, along with similar adiposity and lipolytic defects, compared with young mice. These data are provocative from several perspectives. In the context of metabolism, it is tantalizing to consider that decreasing autophagy with ageing in POMC neurons could contribute to the metabolic problems associated with ageing. Again, this model considers the POMC neurons in isolation, and it would be important to understand how reduced autophagy in aged AgRP neurons counterbalances this situation. In a more general sense, the data strongly support the concept that neuronal autophagy might decline with age.

Autophagy is a major clearance route for many mutant, aggregate-prone intracytoplasmic proteins that cause neurodegenerative disease, such as tau (Alzheimer disease), α-synuclein (Parkinson disease) and huntingtin (Huntington disease), and the risk of these diseases is age-dependent [1]. Thus, it is tempting to suggest that the dramatic age-related risks for these diseases could be largely due to a decreased neuronal capacity to degrade these toxic proteins. Neurodegenerative pathology and age-related metabolic abnormalities might also be related—some of the metabolic disturbances that occur in humans with age could be due to the accumulation of such toxic proteins. High levels of these proteins are seen in many people who do not have, or who have not yet developed, neurodegenerative diseases, as many of them start to accumulate decades before any sign of disease.
These proteins might alter metabolism and appetite either directly, by affecting target neurons, or by influencing hormonal and neurotransmitter inputs into such neurons.
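The inhibitor-based flux measurement described at the top of this piece amounts to simple arithmetic on band intensities: because lysosomal inhibitors block LC3-II degradation, autophagosome synthesis is read out as the difference in LC3-II between inhibitor-treated and untreated samples. A minimal sketch of that logic; the function name and densitometry values below are invented for illustration, not taken from the studies discussed:

```python
# Sketch of the LC3-II flux logic: lysosomal inhibitors block LC3-II
# degradation, so (inhibited - uninhibited) estimates how much LC3-II
# was being delivered to lysosomes, i.e., autophagosome synthesis.
# All values are hypothetical densitometry readings in arbitrary units.

def autophagic_flux(lc3ii_with_inhibitor: float, lc3ii_without: float) -> float:
    """Estimate autophagosome synthesis from paired western blot samples."""
    return lc3ii_with_inhibitor - lc3ii_without

# Steady-state LC3-II alone can be misleading: both conditions here show
# the same uninhibited level, yet flux differs threefold.
fed_flux = autophagic_flux(lc3ii_with_inhibitor=2.0, lc3ii_without=1.0)
starved_flux = autophagic_flux(lc3ii_with_inhibitor=4.0, lc3ii_without=1.0)

print(fed_flux, starved_flux)  # 1.0 3.0
```

This is why the commentary stresses lysosomal inhibitors: without the paired inhibited sample, rapid flux of autophagosomes to lysosomes in neurons can mask a genuine increase in synthesis.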

6.
7.
Of mice and men     
Thomas Erren and colleagues point out that studies on light and circadian rhythmicity in humans have their own interesting pitfalls, of which all researchers should be mindful.

We would like to compliment, and complement, the recent Opinion in EMBO reports by Stuart Peirson and Russell Foster (2011), which calls attention to the potential obstacles associated with linking observations on light and circadian rhythmicity made in nocturnal mice to diurnally active humans. Pitfalls to consider include qualitative extrapolations from short-lived rodents to long-lived humans, quantitative extrapolations across very different doses (Gold et al, 1992) and the varying sensitivity of each species to experimental optical radiation as a circadian stimulus (Bullough et al, 2006), all of which can have a critical influence on an experiment. Thus, Peirson & Foster remind us that “humans are not big mice”. We certainly agree, but we also thought it worthwhile to point out that human studies have their own interesting pitfalls, of which all researchers should be mindful.

Many investigations with humans—such as testing the effects of different light exposures on alertness, cognitive performance, well-being and depression—can suffer from what has been coined the ‘Hawthorne effect’. The term is derived from a series of studies conducted at the Western Electric Company’s Hawthorne Works near Chicago, Illinois, between 1924 and 1932, to test whether the productivity of workers would change with changing illumination levels. One important punch line was that productivity increased with almost any change that was made at the workplaces. One prevailing interpretation of these findings is that humans who know that they are being studied—and in most investigations they cannot help but notice—might exhibit responses that have little or nothing to do with what was intended as the experiment.
Those who conduct circadian biology studies in humans try hard to eliminate possible ‘Hawthorne effects’, but sometimes all they can do is hope for the best and assume that the Hawthorne effect is insignificant.

Even so, and despite the obstacles to circadian experiments with both mice and humans, the wealth of information from work in both species is indispensable. To exemplify, in the last handful of years alone, experimental research in mice has substantially contributed to our understanding of the retinal interface between visible light and circadian circuitry (Chen et al, 2011); has shown that disturbing circadian systems by manipulating light–dark cycles might accelerate carcinogenesis (Filipski et al, 2009); and has suggested that perinatal light exposure—through an imprinting of the stability of circadian systems (Ciarleglio et al, 2011)—might be related to a human’s susceptibility to mood disorders (Erren et al, 2011a) and to the development of internal cancers later in life (Erren et al, 2011b). Future studies in humans must now examine whether, and to what extent, what was found in mice is applicable to and relevant for humans.

The bottom line is that we must be aware of, and first and foremost exploit, evolutionary legacies, such as the seemingly ubiquitous photoreceptive clockwork that marine and terrestrial vertebrates—including mammals such as mice and humans—share (Erren et al, 2008). Translating insights from studies in animals to humans (Erren et al, 2011a,b), and vice versa, into testable research can be a means to one end: to arrive at sensible answers to pressing questions about light and circadian clockworks that, no doubt, play key roles in human health and disease. Pitfalls, however, abound on either side, and we agree with Peirson & Foster that they have to be recognized and monitored.

8.
Geoffrey Miller, EMBO reports (2012) 13(10), 880–884
Runaway consumerism imposes social and ecological costs on humans in much the same way that runaway sexual ornamentation imposes survival costs and extinction risks on other animals.

Sex and marketing have been coupled for a very long time. At the cultural level, their relationship has been appreciated since the 1960s ‘Mad Men’ era, when the sexual revolution coincided with the golden age of advertising and marketers realized that ‘sex sells’. At the biological level, their interplay goes much further back, to the Cambrian explosion around 530 million years ago. During this period of rapid evolutionary expansion, multicellular organisms began to evolve elaborate sexual ornaments to advertise their genetic quality to the most important consumers of all in the great mating market of life: the opposite sex.

Maintaining the genetic quality of one’s offspring had already been a problem for billions of years. Ever since life originated around 3.7 billion years ago, RNA and DNA have been under selection to copy themselves as accurately as possible [1]. Yet perfect self-replication is biochemically impossible, and almost all replication errors are harmful rather than helpful [2]. Thus, mutations have been eroding the genomic stability of single-celled organisms for trillions of generations, and countless lineages of asexual organisms have suffered extinction through mutational meltdown—the runaway accumulation of copying errors [3]. Only through wildly profligate self-cloning could such organisms have any hope of leaving at least a few offspring with no new harmful mutations, which could best survive and reproduce.

Around 1.5 billion years ago, bacteria evolved the most basic form of sex to minimize mutation load: bacterial conjugation [4]. By swapping bits of DNA across the pilus (a tiny intercellular bridge), a bacterium can replace DNA sequences compromised by copying errors with intact sequences from its peers.
Bacteria finally had some defence against mutational meltdown, and they thrived and diversified.

Then, with the evolution of genuine sexual reproduction through meiosis, perhaps around 1.2 billion years ago, eukaryotes made a great advance in their ability to purge mutations. By combining their genes with a mate’s genes, they could produce progeny with huge genetic variety—and, crucially, with a wider range of mutation loads [5]. The unlucky offspring who happened to inherit an above-average number of harmful mutations from both parents would die young without reproducing, taking many mutations into oblivion with them. The lucky offspring who happened to inherit a below-average number of mutations from both parents would live long, prosper and produce offspring of higher genetic quality. Sexual recombination also made it easier to spread and combine the rare mutations that happened to be useful, opening the way for much faster evolutionary advances [6]. Sex became the foundation of almost all complex life because it was so good at both short-term damage limitation (purging bad mutations) and long-term innovation (spreading good mutations).

Yet single-celled organisms always had a problem with sex: they were not very good at choosing sexual partners with the best genes, that is, the lowest mutation loads. Given bacterial capabilities for chemical communication such as quorum-sensing [7], perhaps some prokaryotes and eukaryotes paid attention to short-range chemical cues of genetic quality before swapping genes.
However, mating was mainly random before the evolution of longer-range senses and nervous systems.

All of this changed profoundly with the Cambrian explosion, which saw organisms undergoing a genetic revolution that increased the complexity of gene regulatory networks, and a morphological revolution that increased the diversity of multicellular body plans. It was also a neurological and psychological revolution. As organisms became increasingly mobile, they evolved senses such as vision [8] and more complex nervous systems [9] to find food and evade predators. However, these new senses also empowered a sexual revolution, as they gave animals new tools for choosing sexual partners. Rather than hooking up randomly with the nearest mate, animals could now select mates based on visible cues of genetic quality such as body size, energy level, bright coloration and behavioural competence. By choosing the highest-quality mates, they could produce higher-quality offspring with lower mutation loads [10]. Such mate choice imposed selection on all of those quality cues to become larger, brighter and more conspicuous, amplifying them into true sexual ornaments: biological luxury goods, such as the guppy’s tail and the peacock’s train, that function mainly to impress and attract females [11]. These sexual ornaments evolved to have a complex genetic architecture, to capture a larger share of the genetic variation across individuals and to reveal mutation load more accurately [12].

Ever since the Cambrian, the mating market for sexually reproducing animal species has been transformed to some degree into a consumerist fantasy world of conspicuous quality, status, fashion, beauty and romance. Individuals advertise their genetic quality and phenotypic condition through reliable, hard-to-fake signals or ‘fitness indicators’ such as pheromones, songs, ornaments and foreplay.
Mates are chosen on the basis of who displays the largest, costliest, most precise, most popular and most salient fitness indicators. Mate choice for fitness indicators is not restricted to females choosing males, but often occurs in both sexes [13], especially in socially monogamous species with mutual mate choice, such as humans [14].

Thus, for 500 million years, animals have had to straddle two worlds in perpetual tension: natural selection and sexual selection. Each type of selection works through different evolutionary principles and dynamics, and each yields different types of adaptation and biodiversity. Neither fully dominates the other, because sexual attractiveness without survival is a short-lived vanity, whereas ecological competence without reproduction is a long-lived sterility. Natural selection shapes species to fit their geographical habitats and ecological niches, and favours efficiency in growth, foraging, parasite resistance, predator evasion and social competition. Sexual selection shapes each sex to fit the needs, desires and whims of the other sex, and favours conspicuous extravagance in all sorts of fitness indicators. Animal life walks a fine line between efficiency and opulence. More than 130,000 plant species also play the sexual ornamentation game, having evolved flowers to attract pollinators [15].

The sexual selection world challenges the popular misconception that evolution is blind and dumb. In fact, as Darwin emphasized, sexual selection is often perceptive and clever, because animal senses and brains mediate mate choice. This makes sexual selection closer in spirit to artificial selection, which is governed by the senses and brains of human breeders.
In so far as sexual selection shaped human bodies, minds and morals, we were also shaped by intelligent designers—who just happened to be romantic hominids rather than fictional gods [16].

Thus, mate choice for genetic quality is analogous in many ways to consumer choice for brand quality [17]. Mate choice and consumer choice are both semi-conscious—partly instinctive, partly learned through trial and error and partly influenced by observing the choices made by others. Both are partly focused on the objective qualities and useful features of the available options, and partly on their arbitrary, aesthetic and fashionable aspects. Both create the demand that suppliers try to understand and fulfil, with each sex striving to learn the mating preferences of the other, and marketers striving to understand consumer preferences through surveys, focus groups and social media data mining.

Mate choice and consumer choice can both yield absurdly wasteful outcomes: a huge diversity of useless, superficial variations in the biodiversity of species and the economic diversity of brands, products and packaging. Most biodiversity seems to be driven by sexual selection favouring whimsical differences across populations in the arbitrary details of fitness indicators, not just by naturally selected adaptation to different ecological niches [18]. The result is that, within each genus, a species can be most easily identified by its distinct mating calls, sexual ornaments, courtship behaviours and genital morphologies [19], not by different foraging tactics or anti-predator defences.
Similarly, much of the diversity in consumer products—such as shirts, cars, colleges or mutual funds—is at the level of arbitrary design details, branding, packaging and advertising, not at the level of objective product features and functionality.

These analogies between sex and marketing run deep, because both depend on reliable signals of quality. Until recently, two traditions of signalling theory developed independently in the biological and social sciences. The first landmark in biological signalling theory was Charles Darwin’s analysis of mate choice for sexual ornaments as cues of good fitness and fertility in his book The Descent of Man, and Selection in Relation to Sex (1871). Ronald Fisher analysed the evolution of mate preferences for fitness indicators in 1915 [20]. Amotz Zahavi proposed the ‘handicap principle’ in 1975, arguing that only costly signals can be reliable, hard-to-fake indicators of genetic quality or phenotypic condition [21]. Richard Dawkins and John Krebs applied game theory in 1978 to analyse the reliability of animal signals and the co-evolution of signallers and receivers [22]. In 1990, Alan Grafen proposed a formal model of the ‘handicap principle’ [23], and Richard Michod and Oren Hasson analysed ‘reliable indicators of fitness’ [24]. Since then, biological signalling theory has flourished and has informed research on sexual selection, animal communication and social behaviour.

The parallel tradition of signalling theory in the social sciences and philosophy goes back to Aristotle, who argued that ethical and rational acts are reliable signals of underlying moral and cognitive virtues (ca 350–322 BC).
Friedrich Nietzsche analysed beauty, creativity, morality and even cognition as expressions of biological vigour by using signalling logic (1872–1888). Thorstein Veblen proposed that conspicuous luxuries, quality workmanship and educational credentials act as reliable signals of wealth, effort and taste in The Theory of the Leisure Class (1899), The Instinct of Workmanship (1914) and The Higher Learning in America (1922). Vance Packard used signalling logic to analyse social class, runaway consumerism and corporate careerism in The Status Seekers (1959), The Waste Makers (1960) and The Pyramid Climbers (1962), and Ernst Gombrich analysed beauty in art as a reliable signal of the artist’s skill and effort in Art and Illusion (1977) and A Sense of Order (1979). Michael Spence developed formal models of educational credentials as reliable signals of capability and conscientiousness in Market Signalling (1974). Robert Frank used signalling logic to analyse job titles, emotions, career ambitions and consumer luxuries in Choosing the Right Pond (1985), Passions within Reason (1988), The Winner-Take-All Society (1995) and Luxury Fever (2000).

Evolutionary psychology and evolutionary anthropology have been integrating these two traditions to better understand many puzzles in human evolution that defy explanation in terms of natural selection for survival.
For example, signalling theory has illuminated the origins and functions of facial beauty, female breasts and buttocks, body ornamentation, clothing, big-game hunting, hand-axes, art, music, humour, poetry, story-telling, courtship gifts, charity, moral virtues, leadership, status-seeking, risk-taking, sports, religion, political ideologies, personality traits, adaptive self-deception and consumer behaviour [16,17,25,26,27,28,29].

Building on signalling theory and sexual selection theory, the new science of evolutionary consumer psychology [30] has been making big advances in understanding consumer goods as reliable signals—not just signals of monetary wealth and elite taste, but signals of deeper traits such as intelligence, moral virtues, mating strategies and the ‘Big Five’ personality traits: openness, conscientiousness, agreeableness, extraversion and emotional stability [17]. These individual traits are deeper than wealth and taste in several ways: they are found in the other great apes, are heritable across generations, are stable across life, are important in all cultures and are naturally salient when interacting with mates, friends and kin [17,27,31]. For example, consumers seek elite university degrees as signals of intelligence; they buy organic fair-trade foods as signals of agreeableness; and they value foreign travel and avant-garde culture as signals of openness [17]. New molecular genetics research suggests that mutation load accounts for much of the heritable variation in human intelligence [32] and personality [33], so consumerist signals of these traits might be revealing genetic quality indirectly.
If so, conspicuous consumption can be seen as just another ‘good-genes indicator’ favoured by mate choice.

Indeed, studies suggest that much conspicuous consumption, especially by young single people, functions as a form of mating effort. After men and women think about potential dates with attractive mates, men say they would spend more money on conspicuous luxury goods such as prestige watches, whereas women say they would spend more time doing conspicuous charity activities such as volunteering at a children’s hospital [34]. Conspicuous consumption by males reveals that they are pursuing a short-term mating strategy [35], and this activity is most attractive to women at peak fertility near ovulation [36]. Men give much higher tips to lap dancers who are ovulating [37]. Ovulating women choose sexier and more revealing clothes, shoes and fashion accessories [38]. Men living in towns with a scarcity of women compete harder to acquire luxuries and accumulate more consumer debt [39]. Romantic gift-giving is an important tactic in human courtship and mate retention, especially for men who might be signalling commitment [40]. Green consumerism—preferring eco-friendly products—is an effective form of conspicuous conservation, signalling both status and altruism [41].

Findings such as these challenge traditional assumptions in economics. For example, ever since the Marginal Revolution—the development of economic theory during the 1870s—mainstream economics has made the ‘Rational Man’ assumption that consumers maximize the expected utility of their product choices, without reference to what other consumers are doing or desiring. This assumption was convenient both analytically—it allowed easier mathematical modelling of markets and price equilibria—and ideologically, in legitimizing free markets and luxury goods.
However, new research from evolutionary consumer psychology and behavioural economics shows that consumers often desire ‘positional goods’ such as prestige-branded luxuries that signal social position and status through their relative cost, exclusivity and rarity. Positional goods create ‘positional externalities’—the harmful social side effects of runaway status-seeking and consumption arms races [42].

These positional externalities are important because they undermine the most important theoretical justification for free markets—the first fundamental theorem of welfare economics, a formalization of Adam Smith’s ‘invisible hand’ argument, which says that competitive markets always lead to efficient distributions of resources. In the 1930s, the British Marxist biologists Julian Huxley and J.B.S. Haldane were already wary of such rationales for capitalism, and understood that runaway consumerism imposes social and ecological costs on humans in much the same way that runaway sexual ornamentation imposes survival costs and extinction risks on other animals [16]. Evidence shows that consumerist status-seeking leads to economic inefficiencies and costs to human welfare [42]. Runaway consumerism might be one predictable result of a human nature shaped by sexual selection, but we can display desirable traits in many other ways, such as green consumerism, conspicuous charity, ethical investment and social media such as Facebook [17,43].

Future work in evolutionary consumer psychology should give further insights into the links between sex, mutations, evolution and marketing. These links have been important for at least 500 million years and probably sparked the evolution of human intelligence, language, creativity, beauty, morality and ideology.
A better understanding of these links could help us nudge global consumerist capitalism into a more sustainable form that imposes lower costs on the biosphere and yields higher benefits for future generations.

9.
10.
11.
EMBO J (2013) 32(23), 3017–3028. doi:10.1038/emboj.2013.224; published online 18 October 2013

Commensal gut bacteria benefit their host in many ways, for instance by aiding digestion and producing vitamins. In a new study in The EMBO Journal, Jones et al (2013) report that commensal bacteria can also promote intestinal epithelial renewal in both flies and mice. Interestingly, among commensals this effect is most specific to Lactobacilli, the friendly bacteria we use to produce cheese and yogurt. Lactobacilli stimulate NADPH oxidase (dNox/Nox1)-dependent ROS production by intestinal enterocytes and thereby activate intestinal stem cells.

The human gut contains huge numbers of bacteria (∼10¹⁴ per person) that play beneficial roles in our health, including digestion, building our immune system and competing with harmful microbes (Sommer and Backhed, 2013). Both commensal and pathogenic bacteria can elicit antimicrobial responses in the intestinal epithelium and also stimulate epithelial turnover (Buchon et al, 2013; Sommer and Backhed, 2013). In contrast to gut pathogens, relatively little is known about how commensal bacteria influence intestinal turnover. In a simple yet elegant study reported recently in The EMBO Journal, Jones et al (2013) show that, among several different commensal bacteria tested, only Lactobacilli markedly promoted intestinal stem cell (ISC) proliferation, and that they did so by stimulating reactive oxygen species (ROS) production. Interestingly, the specific effect of Lactobacilli was similar in both Drosophila and mice. In addition to distinguishing functional differences between species of commensals, this work suggests how the ingestion of Lactobacillus-containing probiotic supplements or food (e.g., yogurt) might support epithelial turnover and health.

In both mammals and insects, ISCs give rise to intestinal enterocytes, which not only absorb nutrients from the diet but must also interact with the gut microbiota (Jiang and Edgar, 2012).
The metazoan intestinal epithelium has developed conserved responses to enteric bacteria, for instance the expression of antimicrobial peptides (AMPs; Gallo and Hooper, 2012; Buchon et al, 2013), presumably to kill harmful bacteria while allowing symbiotic commensals to flourish. In addition to AMPs, intestinal epithelial cells use NADPH-family oxidases to generate ROS that are used as microbicides (Lambeth and Neish, 2013). High ROS levels during enteric infections likely act indiscriminately against both commensals and pathogens, but controlled, low-level ROS can act as signalling molecules that regulate various cellular processes, including proliferation (Lambeth and Neish, 2013). In flies, exposure to pathogenic Gram-negative bacteria has been reported to result in ROS (H₂O₂) production by an enzyme called dual oxidase (Duox; Ha et al, 2005). Duox activity in the fly intestine (and likely also the mammalian one) has recently been discovered to be stimulated by uracil secreted by pathogenic bacteria (Lee et al, 2013). In the mammalian intestine, another enzyme, NADPH oxidase (Nox), has also been shown to produce ROS in the form of superoxide (O₂⁻), in this case in response to formylated bacterial peptides (Lambeth and Neish, 2013). A conserved role for Nox in the Drosophila intestinal epithelium had not until now been explored.

Jones et al (2013) tested seven different commensal bacteria to see which would stimulate ROS production by the fly’s intestinal epithelium, and found that only one species, a Gram-positive Lactobacillus, could stimulate significant production of ROS in intestinal enterocytes. Five bacterial species were tested in mice or cultured intestinal cells, and again it was a Lactobacillus that generated the strongest ROS response. Although not all of the most prevalent enteric bacteria were assayed, those others that were—such as E. coli—induced only mild, barely detectable levels of ROS in enterocytes.
Surprisingly, although bacteria pathogenic to Drosophila, such as Erwinia carotovora, were expected to stimulate ROS production via Duox, Jones et al (2013) did not observe this in flies using either the ROS-detecting dye hydrocyanine-Cy3 or a ROS-sensitive transgenic reporter, glutathione S-transferase-GFP. Further, Jones et al (2013) found that genetically suppressing Nox in either Drosophila or mice decreased ROS production after Lactobacillus ingestion. Consistent with the important role of Nox, Duox appeared not to be required for ROS production after Lactobacillus ingestion. In addition, Jones et al (2013) found that Lactobacilli also promoted DNA replication—a metric of cell proliferation and epithelial renewal—in the fly’s intestine, and that this too was ROS- and Nox-dependent. Again, the same relationship was found in the mouse small intestine. Together, these results suggest a conserved mechanism by which Lactobacilli can stimulate Nox-dependent ROS production in intestinal enterocytes and thereby promote ISC proliferation and enhance gut epithelial renewal.

In the fly midgut, uracil produced by pathogenic bacteria can stimulate Duox-dependent ROS production, which is thought to act as a microbicide (Lee et al, 2013) and can also promote ISC proliferation (Buchon et al, 2009). However, Duox-produced ROS may also damage the intestinal epithelium itself and thereby promote epithelial regeneration indirectly through stress responses. In this disease scenario, ROS appears to be sensed by the stress-activated Jun N-terminal kinase (JNK; Figure 1A), which can induce pro-proliferative cytokines of the Leptin/IL-6 family (Unpaireds, Upd1–3) (Buchon et al, 2009; Jiang et al, 2009). These cytokines activate JAK/STAT signalling in the ISCs, promoting their growth and proliferation and accelerating regenerative repair of the gut epithelium (Buchon et al, 2009; Jiang et al, 2009).
It is also possible, however, that low-level ROS, or specific types of ROS (e.g., H₂O₂), might induce ISC proliferation directly by acting as a signal between enterocytes and ISCs. Since commensal Lactobacillus stimulates ROS production via Nox rather than Duox, this might be a case in which a non-damaging ROS signal promotes intestinal epithelial renewal without stress signalling or a microbicidal effect (Figure 1B). However, Jones et al (2013) stopped short of ruling out a role for oxidative damage, cell death or stress signalling in the intestinal epithelium following colonization by Lactobacilli, and so these parameters must be checked in future studies. Perhaps even the friendliest symbionts cause a bit of ‘healthy’ damage to the gut lining, stimulating it to refresh and renew. Whether damage-dependent or not, the stimulation of Drosophila ISC proliferation by commensals and pathogens alike appears to involve the same cytokine (Upd3; Buchon et al, 2009), and so some of the differences between truly pathogenic and ‘friendly’ gut microbes might be ascribed more to matters of degree than to qualitative distinctions. Future studies exploring exactly how different types of ROS signals stimulate JNK activity, gut cytokine expression and epithelial renewal should be able to sort this out, and perhaps help us learn how to better manage the ecosystems in our own bellies. From the lovely examples reported by Jones et al (2013), an experimental back-and-forth between the Drosophila and mouse intestine seems an informative way to go.

Figure 1. Metazoan intestinal epithelial responses to commensal and pathogenic bacteria. (A) High reactive oxygen species (ROS) levels generated by dual oxidase (Duox) in response to uracil secretion by pathogenic bacteria. (B) Low ROS levels generated by NADPH oxidase (Nox) in response to commensal bacteria.
In addition to acting as a microbicide, ROS in flies may stimulate JNK signalling and cytokine (Upd1–3) expression in enterocytes, thereby stimulating ISC proliferation and epithelial turnover or regeneration. Whether this stimulation requires damage to or loss of enterocytes has yet to be explored.

12.
Assisted reproductive technologies enable subfertile couples to have children. But there are health risks attached for both mothers and children that need to be properly understood and managed.

Assisted reproductive technology (ART) has become a standard intervention for couples with infertility problems, especially as ART is highly successful and overall carries low risks [1,2]. The number of infants born following ART has increased steadily worldwide, with more than 5,000,000 so far [3]. In industrialized countries, 1–4% of newborns have been conceived by using ART [4,5], probably owing to the fact that couples frequently delay childbearing until their late 30s, when fertility decreases in both men and women [2]. Considering the possibility that male fertility might be declining, as Richard Sharpe has discussed in this series [6], it is likely that ART will be even more widely used in the future. Yet, as the rate of ART and the total number of pregnancies has increased, it has become apparent that ART is associated with potential risks to the mother and fetus. The most commonly cited health problems pertain to multiple gestation pregnancies and multiple births. More recently, however, concerns about the risks of birth defects and genetic disorders have been raised. There are questions about whether the required manipulations and the artificial environments of gametes and embryos are creating short- and long-term health risks in mothers and children by interfering with epigenetic reprogramming.

Notwithstanding, ART represents a tremendous achievement in human reproductive medicine. The birth of Louise Brown, the first 'test tube baby', in 1978 was the result of the collaborative work of embryologist Robert Edwards and gynaecologist Patrick Steptoe [7]. This success was a culmination of many years of work at universities and clinics worldwide.
An initial lack of support, as well as criticism from ethicists and the church, delayed the opening of the first in vitro fertilization (IVF) clinic in Bourn Hall near Cambridge until 1980. By 1986, 1,000 children conceived by IVF at Bourn Hall had been born [8]. In 2010, Edwards received the Nobel Prize in Physiology or Medicine for the development of IVF. Regrettably, Steptoe had passed away in 1988 and could not share the honour.

Over the next decades, many improvements in IVF procedures were made to reduce the risks of adverse effects and increase success rates, including controlled ovarian stimulation, timed ovulation induction, ultrasound-guided egg retrieval, cryopreservation of embryos and intracytoplasmic sperm injection (ICSI)—a technique in which a single sperm cell is injected into an oocyte using a microneedle. There were further improvements, such as assisted hatching and refinements in media composition, including sequential media, which allow the in vitro culture of the embryo to reach the blastocyst stage [8].

Current IVF procedures involve multiple steps including ovarian stimulation and monitoring, oocyte retrieval from the ovary, fertilization in vitro and embryo transfer to the womb. Whereas the first IVF cycles, including the conception of Louise Brown, used natural ovulatory cycles, which result in the retrieval of one or two oocytes, most IVF cycles performed today rely on controlled ovarian stimulation using injectable gonadotropins—follicle stimulating hormone and luteinizing hormone—in supraphysiological concentrations for 10–14 days, followed by injection of human chorionic gonadotropin (hCG) 38–40 h before egg retrieval to trigger ovulation.
This updated protocol makes it possible to grow multiple follicles and to retrieve 10–20 oocytes in one IVF cycle, thereby increasing the number of eggs available for fertilization.

Post-retrieval, the embryologist places an egg and sperm together in a test tube for fertilization. Alternatively, a single sperm cell can be injected into an egg by using ICSI. This procedure was initially developed for couples with poor sperm quality [9], but has become the predominant fertilization technique used in many IVF clinics worldwide [8]. The developing embryos are monitored by microscopy, and viable embryos are transferred into the woman's womb for implantation. Louise Brown's embryo, like many embryos today, was transferred three days after egg retrieval, at approximately the eight-cell stage. However, using sequential media, many clinics advocate culturing embryos until day five, when they reach the blastocyst stage. The prolonged culture period allows self-selection of the most viable embryos for transfer and increases the chance of a viable pregnancy. Excess embryos can be cryopreserved and transferred at a later date by using a procedure known as frozen embryo transfer (FET). In this article we use the term ART to refer to IVF procedures with or without ICSI and FET.

Science & Society Series on Sex and Science

Sex is the greatest invention of all time: not only has sexual reproduction facilitated the evolution of higher life forms, it has had a profound influence on human history, culture and society. This series explores our attempts to understand the influence of sex in the natural world, and the biological, medical and cultural aspects of sexual reproduction, gender and sexual pleasure.

Embryos can also be screened for chromosomal aneuploidies—missing or extra chromosomes—by preimplantation genetic diagnosis (PGD) when indicated and when available. PGD can also be used to test fertile couples at increased risk of genetic disorders. To perform PGD, a single cell is obtained from three-day-old embryos for molecular testing, for example sequencing for inherited monogenic disorders or fluorescent in situ hybridization for chromosomal abnormalities [8]. Only embryos with a normal chromosomal constitution, and without the genetic disorder in question, would then be transferred into the woman's womb.

Despite tremendous progress during the past three decades, people undertaking ART still face a considerable risk of failure to achieve parenthood. The rate of clinical pregnancies in Bourn Hall between 1980 and 1985 was 24% and 14% in women younger and older than 40 years, respectively [10]. The reported rates for clinical pregnancies and live births vary by country; the average delivery rate is 22.4%, 23.3% and 17.1% for IVF, ICSI and FET cycles, respectively [11]. According to the last Centers for Disease Control and Prevention report in 2009, the average live-birth rate was 35% per fresh ART cycle, although it declines sharply with age, from 45% among women younger than 35 years to 7% among women older than 42 years [5]. The reasons for failure include poor response to ovarian stimulation, ovarian hyperstimulation syndrome and failure of eggs to fertilize.
However, these failures occur in only a minority of patients, and the success rate of egg retrieval and fertilization leading to embryo transfer is a remarkable 90% [12].

Implantation remains the least understood process and is a key rate-limiting step in ART. Poor embryo quality is considered to be the main cause of implantation failure, and it reflects a high incidence of chromosomal aneuploidies, which increases with maternal age [13]. One obvious solution to improve implantation rates is to transfer more embryos. However, this also increases the risk of multiple births, and the related morbidity and mortality in newborns. An alternative approach is to select for good-quality embryos by culturing them to the blastocyst stage, because it seems that aneuploid embryos arrest by this stage and that blastocysts are more likely to have a normal chromosomal complement. There is ongoing research aimed at identifying viable embryos through PGD and metabolic profiling [13].

It has also been suggested that failure to implant could be caused by the inability of the embryo to hatch out of a glycoprotein layer surrounding the embryo, known as the 'zona pellucida'; this layer hardens if the embryo is cultured or frozen. Assisted hatching by rupturing the zona pellucida before embryo transfer does increase clinical pregnancy rates, especially for thawed embryos [13]. Another factor linked to the failure of implantation is endometrial receptivity. The endometrium consists of multi-layered mucosa cells in the inner wall of the uterus; it undergoes coordinated remodelling during the menstrual cycle, and there is a specific time window when it is receptive to embryo implantation.
Several research studies have identified molecular biomarkers of poor endometrial receptivity, showing that prostaglandins, cell adhesion molecules, mucins and cytokines are important [13].

When it comes to health risks for mothers and infants, the use of ART increases the risk of multiple births, including higher rates of caesarean sections, prematurity, low birth weight, infant death and disability. More recently, elevated risks of birth defects, genetic abnormalities, neurodevelopmental disorders and imprinting disorders have been reported; however, not all of these concerns are substantiated. There are still many unanswered questions regarding the potential short- and long-term health risks of ART for women and children, and there are tremendous challenges in studying the safety of ART procedures. Apart from the subset of individuals undergoing ART for social reasons—single parents or same-sex couples—most patients are subfertile couples. Subfertility, defined as a failure to conceive naturally after 12 months of unprotected intercourse, affects 8–20% of couples [2], and it can occur for a variety of known or unknown reasons, including maternal factors—endocrine, hormonal, endometriosis and blocked fallopian tubes—and paternal factors such as spermatogenesis abnormalities.

Most studies have assessed the risks of ART by comparing the outcomes of ART-conceived pregnancies with naturally conceived pregnancies. There is emerging evidence that underlying maternal or paternal subfertility might be an important factor in obstetric, neonatal and childhood outcomes in the ART population. Therefore, to determine the specific health risks associated with the ART process itself, the outcomes of ART-conceived pregnancies should be compared with those of naturally conceived pregnancies in subfertile parents, which is methodologically difficult.
Alternatively, studying the health risks of ART in fertile couples—for instance, same-sex couples and couples at risk of genetic disorders—would be informative, but the number of such couples is relatively small.

Women who undergo ART are at risk of ovarian hyperstimulation syndrome (OHSS), a complication of ovulation induction resulting in enlargement of the ovaries and retention of fluids, leading to various secondary complications that normally resolve within two weeks but can persist if pregnancy occurs. Patients with OHSS can be offered embryo cryopreservation and frozen embryo transfer once symptoms resolve. Moderate forms of OHSS occur in 5% of patients undergoing ART; 2% of patients require hospitalization. Death occurs with an incidence of approximately 3 per 100,000 ART cycles [14]. OHSS is predominantly caused by the human chorionic gonadotropin injection used for inducing final oocyte maturation and ovulation, and research is focused on optimizing alternative stimulation protocols [14].

The use of supraphysiological concentrations of hormones during ovarian stimulation has also raised concerns that ART can increase the risk of cancers linked to hormonal fluctuations, including breast, ovarian, endometrial, cervical and colon cancers, as well as melanoma. Studies evaluating the risks of cervical cancers, colon cancers and melanoma have not demonstrated increased risks for women undergoing ART [1]. The data for breast, ovarian and endometrial cancer are more complex, however, and more research is required to determine conclusively whether there is an increased risk.

The perinatal and obstetric risks of ART are most significantly influenced by multiple pregnancies. These carry a more than 60% risk of low birth weight or premature delivery [2], and related risks of pregnancy complications such as gestational diabetes, abnormal placentation and hypertensive disorders [1].
Multiple pregnancies occur in 1% of naturally conceived pregnancies and 25–50% of ART pregnancies, owing to multiple embryo transfer. In the Western world, about 30–50% of all twin pregnancies result from ART [2]. Whilst double or triple embryo transfer is still common, the development of cryopreservation techniques and extended blastocyst culture has increased the use of single embryo transfer (SET), especially for younger women. Many European countries and the province of Quebec, in Canada, where ART is publicly funded, have adopted a policy of SET, which has dramatically decreased the incidence of multiple pregnancies. In Belgium and Quebec, SET policies have reduced multiple pregnancies from 19% to 3% and from 27% to 6%, respectively. It has been argued that SET results in a lower live-birth rate than double-embryo transfer, but this deficit is almost completely overcome by an additional single frozen embryo cycle [2].

The question of whether ART increases the risks of pregnancy complications, including prematurity and low birth weight in singletons, remains unresolved; several studies have found an increased risk, but others have not replicated these findings [1,2]. It has been suggested that the fertility history of patients undergoing ART is an important factor, as there is an association between the length of time to conception and prematurity and birth weight [15]. Prematurity and low birth weight are also known to be associated with long-term health effects, including adult-onset coronary artery disease, hypertension, obesity and type 2 diabetes [16,17].

Various studies have also reported a higher incidence of congenital anomalies in ART-conceived children, with a suggested 30% increase in malformations [2].
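The claim that an additional frozen cycle nearly closes the gap between SET and double-embryo transfer follows from simple probability. A minimal sketch, assuming independent cycles and purely illustrative per-cycle live-birth rates (the numeric rates below are assumptions, not figures from the text):

```python
# Cumulative probability of at least one live birth over sequential
# transfer cycles, assuming the cycles are independent.
def cumulative_rate(per_cycle_rates):
    """Return P(at least one success) across the given cycles."""
    p_fail = 1.0
    for p in per_cycle_rates:
        p_fail *= 1.0 - p
    return 1.0 - p_fail

# Hypothetical per-cycle live-birth rates, for illustration only.
p_set, p_fet, p_det = 0.30, 0.25, 0.45

set_plus_fet = cumulative_rate([p_set, p_fet])
print(f"SET + one FET: {set_plus_fet:.3f}; DET alone: {p_det:.3f}")
```

Under these assumed rates, SET followed by one frozen transfer reaches roughly the same cumulative live-birth rate as a single double-embryo transfer, while avoiding most of the multiple pregnancies that DET produces.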
However, this is another risk that might be attributable to parental subfertility: a study comparing children conceived by ART to subfertile parents with children conceived naturally to subfertile parents did not find any significant difference in the rate of congenital anomalies [2]. Findings from another study, which compared the risks of birth defects in children conceived naturally to women with and without a history of subfertility against children conceived with the assistance of ART, also suggest that it is subfertility, rather than ART, that is associated with an increased risk of birth defects [18].

Several studies have reported an increased risk of cerebral palsy and other neurological abnormalities in children conceived by ART [2]. But again, these findings are mainly attributed to complications resulting from multiple pregnancies, including prematurity and low birth weight. The increased utilization of SET is therefore expected to result in fewer multiple pregnancies, with a concomitant decrease in neurological complications. Further evidence that neurological complications in ART children are not exclusively related to ART came from studies that assessed neurodevelopmental outcomes—such as locomotion, cognition, language and behavioural development—of ART children in comparison with naturally conceived children. These analyses did not reveal any differences when adjusted for the confounding factors of low birth weight and prematurity. In a similar vein, numerous studies have investigated whether there is an increased incidence of autism in ART-conceived children, but these have been inconclusive [19].

There are potential concerns regarding the fertility of ART children, but addressing them requires future studies, as most of this population is younger than 30 years of age.
There is some evidence that boys conceived through ICSI have an increased rate of genital anomalies [2], and that males with severe infertility, such as low sperm counts, are more likely to carry chromosomal abnormalities, which could be passed on to their children conceived through ICSI [15].

It has also been suggested that there might be an increased risk of cancers in ART-conceived offspring. Although multiple studies have identified no such risk, a large Swedish study reported a marginally increased risk of cancer, including haematologic, eye and nervous system cancers, solid tumours and histiocytosis [2]. As with other ART-related adverse health outcomes, it has been suggested that the increased risk of cancer could be attributed to prematurity, a recognized risk factor for cancer, rather than to the ART procedure itself. Further long-term studies are required to determine whether there is truly an increased risk of adult cancers in ART offspring.

One thing is clear from the available evidence to date: there remain unanswered questions about both the health risks associated with ART and the potential mechanisms that could account for these findings. One possible explanation is that the exposure of gametes and preimplantation embryos to the various steps of ART might affect the growth and development of offspring through dysregulation of epigenetic pathways [20]. In addition, there is evidence that genetic and epigenetic alterations might be inherited from the gametes of subfertile parents, which would reinforce assertions that subfertility itself might play a role in ART-related health outcomes [1,20].

Epigenetics refers to heritable changes in gene expression without alterations to the underlying DNA sequence.
DNA methylation and modifications of histones are epigenetic modifications that determine active versus repressive chromatin conformations, thereby regulating gene expression and driving essential processes such as embryonic development, fetal organ development, cell differentiation and tissue-specific gene expression [21]. Genomic imprinting is a type of epigenetic gene regulation that uses epigenetic marks to silence specifically one of the parental alleles. There are approximately 100 known imprinted genes in humans [22]. Most imprinted genes are found in clusters across the genome and are regulated by parent-specific DNA methylation and histone modification marks at cis-acting imprinting centres, as well as by non-coding RNAs. Most of the known imprinted genes have functions related to growth and behaviour; disruption of the normally programmed parental expression of imprinted genes can therefore result in disorders of growth and neurodevelopment.

Gametogenesis and embryogenesis are important stages of mammalian development that require genome-wide epigenetic reprogramming. During spermatogenesis, protamines replace most histone proteins to create highly compacted DNA. Establishment of DNA methylation imprints at paternally methylated imprinting centres is complete in males at the time of birth. In females, the establishment of maternally methylated imprinting centres begins during puberty and is almost complete in ovulated oocytes. After fertilization, the paternal genome undergoes rapid active DNA demethylation, in which protamines are replaced by histones, whilst the maternal genome is passively demethylated, so that DNA methylation patterns are lost through cell divisions. Although the whole genome undergoes demethylation, parent-specific DNA methylation is maintained at imprinting centres. Subsequently, the genome is remethylated and cell-type-specific epigenetic patterns are established as embryonic development proceeds.
The parent-specific DNA methylation at imprinting centres is maintained in somatic cells, but it is erased and re-established in the gametes, starting a new cycle of imprinting (Fig 1; [23]). As the establishment and maintenance of imprinting marks coincides in timing with important stages of ART, such as oocyte maturation under supraphysiological hormone concentrations and embryo culture, it has been proposed that ART can lead to imprinting errors [24].

Figure 1. Life cycle of genomic imprinting and assisted reproductive technology. Erasure, re-establishment and maintenance of genomic imprinting occur during gametogenesis and preimplantation embryo development. Blue and red solid lines show paternal and maternal methylation at imprinting centres through gametogenesis and the early stages of preimplantation development. Imprinting marks are erased at early stages of gametogenesis. Re-establishment of imprinting occurs throughout gametogenesis, but finishes much later in oocytes than in sperm. During preimplantation development, both maternal and paternal imprinting marks are maintained whilst the rest of the genome is demethylated. The paternal genome is demethylated rapidly and actively (dashed blue line) whilst the maternal genome is demethylated more slowly and passively, through cell division (dashed red line). Various steps of assisted reproductive technology, such as ovarian stimulation, ovulation induction, and gamete and embryo manipulation and culture, create unusual environments for gametes and embryos and can thus interfere with the proper establishment of imprinting marks in oocytes or their maintenance in embryos. Subfertility can be associated with epigenetic errors in imprinting erasure and/or establishment in both oocytes and sperm.
Adapted from [23].

In 2001, the first evidence that genomic imprinting can be perturbed during ART procedures came from studying sheep fetuses derived from in vitro cultured embryos that presented with large offspring syndrome (LOS; [25]). LOS occurs sporadically in cattle and sheep conceived by IVF and is characterized by a 20–30% increase in birth weight, frequently accompanied by congenital anomalies and placental dysfunction [24]. Owing to the phenotypic similarities of LOS to the human overgrowth disorder Beckwith–Wiedemann syndrome (BWS), which is caused by the dysregulation of gene expression within an imprinted cluster on chromosome 11p15.5, the authors hypothesized that genes from the orthologous cluster in sheep, or a closely related pathway, could be dysregulated in LOS. They tested expression of the insulin-like growth factor 2 (IGF2) gene, known to be overexpressed in BWS, and the IGF2R receptor gene, which is involved in clearance of IGF2 from the circulation; IGF2R is imprinted in sheep but not in humans. In sheep with LOS, no differences were found for IGF2, but reduced expression of IGF2R was observed after loss of DNA methylation at the imprinting centre for this gene [25].

In the following decade, several studies provided further evidence that children conceived by ART might be at increased risk of imprinting disorders. The strongest case has been made for BWS and Angelman syndrome. BWS is the most common human overgrowth syndrome, characterized by prenatal and postnatal overgrowth, congenital anomalies and tumour predisposition [26]. Angelman syndrome is a neurodevelopmental disorder characterized by microcephaly, severe intellectual disability and a unique behavioural profile including frequent laughter, smiling and excitability [27]. Multiple case reports from various countries indicate an increased frequency of BWS and Angelman syndrome in ART children (3–10-fold) compared with the general population.
However, two cohort studies failed to replicate this association [28]. The low incidence of both BWS (1 in 13,700) and Angelman syndrome (1 in 15,000) in the general population [28] makes epidemiological studies difficult—the two cohort studies reported 2,492 and 6,052 ART children, respectively, and are probably underpowered to detect an increased risk of BWS and Angelman syndrome. Moreover, even if there are increased relative risks for these syndromes in ART children, the absolute risks in this population remain low.

The molecular causes of BWS and Angelman syndrome are heterogeneous. They include genomic (deletion, uniparental disomy and gene mutation) and epigenetic (loss of imprinting due to aberrant DNA methylation) alterations at imprinted gene clusters on chromosomes 11p15.5 and 15q11–q13, respectively. These alterations occur with specific frequencies for each of the two disorders [26,27]. Results of molecular testing in children with these syndromes who were conceived using ART reveal an excess of epigenetic compared with genetic molecular alterations. For example, loss of DNA methylation at imprinting centre 2 occurs in about 50% of BWS cases in the general population, whereas several studies found loss of DNA methylation at imprinting centre 2 in 96% (27/28) of ART-conceived children with BWS. In Angelman syndrome, approximately 3% of cases in the general population have loss of methylation at 15q11–q13, whereas 5 out of 19 (26%) children with Angelman syndrome conceived by ART, or conceived naturally by parents with a history of subfertility, had loss of DNA methylation at 15q11–q13 (Fig 2).

Figure 2. Enrichment of epigenetic alterations in Beckwith–Wiedemann syndrome and Angelman syndrome after assisted reproductive technology.
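The underpowering point can be made concrete: with a background incidence of 1 in 13,700, even the larger cohort would expect well under one BWS case by chance. A rough sketch using the cohort sizes quoted above (the fivefold relative risk is a hypothetical value for illustration, not a figure from the text):

```python
# Expected number of BWS cases in the two reported ART cohorts, given
# the background incidence of 1 in 13,700 quoted in the text.
BACKGROUND_INCIDENCE = 1 / 13_700

def expected_cases(cohort_size, relative_risk=1.0):
    """Expected case count under a simple binomial model."""
    return cohort_size * BACKGROUND_INCIDENCE * relative_risk

for n in (2_492, 6_052):       # cohort sizes reported in the text
    for rr in (1.0, 5.0):      # rr=5 is a hypothetical elevated risk
        print(f"n={n}, relative risk {rr:g}: "
              f"expect {expected_cases(n, rr):.2f} cases")
```

Even under an assumed fivefold relative risk, the larger cohort expects only about two cases, which is why studies of this size cannot distinguish a modest increase in risk from none at all.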
Loss of methylation (LOM) at imprinting centre 2 (IC2) on chromosome 11p15.5 contributes to 50% of Beckwith–Wiedemann syndrome (BWS) cases in the general population, whereas LOM at IC2 is found in 27 out of 28 cases (96%) in the BWS assisted reproductive technology (ART) population, which represents a 1.9-fold enrichment of this epigenetic defect. For Angelman syndrome (AS), methylation disruption at the 15q11–q13 imprinting centre contributes to 3% of AS cases, whereas in the AS ART and subfertility population it was found in 5 out of 19 cases (26%; an approximately eightfold enrichment). Data from the following publications were used for these calculations: BWS [31,32,33,34,35]; AS [35,36].

The data for loss of DNA methylation in Angelman syndrome cases conceived naturally by subfertile parents highlight the fact that epigenetic alterations could, at least in part, result from underlying parental subfertility. Indeed, several studies have shown that abnormalities of spermatogenesis, such as oligospermia (low sperm concentration), low sperm motility or abnormal sperm morphology, are associated with altered DNA methylation at imprinted loci. These alterations occur at imprinting centres on both maternal and paternal alleles in sperm and could be transmitted to offspring conceived by ART [26]. One study of chromosomally normal fetuses spontaneously aborted at six to nine weeks of gestation found that DNA methylation alterations at imprinted loci were sometimes inherited from sperm. Thus, it is possible that this dysregulation of imprinting in male gametes might be one cause of the association between imprinting disorders and ART.

Studies of other known imprinted syndromes, such as Prader–Willi syndrome, Russell–Silver syndrome, maternal and paternal uniparental disomy of chromosome 14, pseudohypoparathyroidism type 1b and transient neonatal diabetes mellitus, have either not demonstrated an association with ART or have been inconclusive owing to their small size [29].
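The enrichment figures follow directly from the ratio of the observed fraction in the ART population to the general-population baseline; a quick check using the counts quoted above:

```python
# Fold enrichment of an epigenetic defect in the ART population
# relative to its frequency in the general population.
def fold_enrichment(cases, total, baseline_fraction):
    """Ratio of the observed fraction (cases/total) to the baseline."""
    return (cases / total) / baseline_fraction

bws = fold_enrichment(27, 28, 0.50)      # BWS: LOM at IC2, 50% baseline
angelman = fold_enrichment(5, 19, 0.03)  # AS: 15q11-q13 LOM, 3% baseline
print(f"BWS: {bws:.1f}-fold; Angelman: {angelman:.1f}-fold")
```

This reproduces the 1.9-fold figure for BWS and gives roughly 8.8 for Angelman syndrome, consistent with the approximately eightfold enrichment described in the caption.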
A link has also been suggested between ART and the newly defined 'multiple maternal hypomethylation syndrome', which clinically presents either as BWS or as transient neonatal diabetes mellitus and is associated with loss of DNA methylation at multiple maternally methylated imprinting centres; loss of methylation at paternal imprinting centres has not been reported so far. Thus, the human imprinting disorders that have been observed with increased relative frequency in ART offspring are confined to loss of DNA methylation at maternally methylated imprinting centres, similar to the epimutations of IGF2R in LOS. One could propose that ART has a greater impact on female than on male gametes, as the eggs are subjected to more environmental exposures—supraphysiological doses of hormones—and more manipulation than the sperm. However, studies of mouse in vitro cultured embryos and of ART-exposed human and mouse gametes suggest that ART can also be associated with either loss or gain of DNA methylation on both maternal and paternal alleles [23].

Mouse models are a valuable means of investigating which stages of ART procedures can disrupt normal imprinting patterns. Their advantage is the ability to investigate each of the parameters of ART—ovulation stimulation and embryo culturing—separately and at different stages of development. Furthermore, mouse models allow investigators to alter ART parameters, such as the concentration of hormones or the media used for embryo culturing. Most importantly, studies in animal models have shown that ART procedures, without the confounding factor of subfertility, do have a negative impact on imprint regulation [23].

The exposure of maturing mouse oocytes to abnormally high doses of gonadotropins has been suggested to alter imprint establishment.
Yet, studies performed directly on superovulated oocytes are inconclusive, as not all of them have demonstrated increased rates of DNA methylation errors at imprinting centres compared with spontaneously ovulated oocytes. Interestingly, studies of DNA methylation in mouse blastocysts harvested from superovulated mothers identified an increased rate of DNA methylation errors at imprinting centres. This included loss of DNA methylation at the paternally methylated H19—the imprinting centre on human chromosome 11 and mouse chromosome 7 implicated in BWS and the related undergrowth disorder Russell–Silver syndrome. This suggests that superovulation also impairs imprinting maintenance, probably by affecting the ability of the oocyte to synthesize and store sufficient maternal factors (RNA and proteins; [23]). In support of this hypothesis, four maternal-effect proteins involved in imprinting maintenance in preimplantation embryos have previously been identified. It was also found that imprint errors arise in blastocysts in a dose-dependent manner: higher doses of hormones resulted in DNA methylation errors in a larger number of embryos [23].

Another factor that might contribute to imprinting errors is the micromanipulation of gametes during IVF and ICSI procedures. Evidence supporting this hypothesis includes the observation in mouse models that a higher number of IVF embryos—resulting from superovulation alone or from superovulation and embryo culturing—have aberrant H19 DNA methylation compared with in vivo conceived embryos [23]. Media with varying compositions are used in ART clinics, and whilst all of the media are suboptimal for the normal maintenance of all DNA imprints in mouse embryos, the number of embryos with aberrant DNA methylation at imprinting centres varies depending on the media [23].
Interestingly, it was also found that embryos with faster rates of development are more prone to loss of DNA methylation at imprinting centres [23].

Though it is not yet clear how these findings relate to ART in humans, the mouse research is crucial for informing human studies about which variables should be addressed to optimize the safety and efficacy of ART procedures. Apart from ART itself, it has been shown that compromised fertility in mice results in loss or delay of DNA methylation acquisition at one of three tested imprinted genes. The compromised fertility is induced by genetic manipulation of a gene involved in communication between oocytes and the surrounding follicular cells, which is crucial for proper oocyte maturation. The results suggest that the observed loss of DNA methylation could be caused by impaired transport of metabolites from follicular cells to oocytes, which is important for imprint establishment [23].

Data linking dysregulation of imprinted loci and ART are limited to several imprinted gene clusters associated with clinically recognizable syndromes. However, there are more genes in the human genome that have been discovered to be, or are predicted to be, imprinted [22] but are not yet known to be associated with clinical phenotypes. Potentially, ART can lead to dysregulation of these imprinted genes, which might be another, as yet unrecognized, factor contributing to neonatal and long-term health problems of ART-conceived children. At this point, it is also not clear whether epigenetic disruption during ART is limited to imprinted genes or has more global effects on the genome. Genome-wide DNA methylation analyses are limited, in both human and mouse, to individuals with no apparent disease phenotype.
So far, these data have been inconclusive [23,28].

One could propose that ART has a greater impact on female than male gametes, as the eggs are subjected to more environmental exposures […] and more manipulation than the sperm

Despite significant advances in the efficacy and success of ART procedures during the past few decades, the health risks, especially those related to long-term outcomes in ART-conceived children, remain poorly understood. Moreover, the phenomenon known as ‘fetal programming’—whereby maternal and in utero exposures can lead to various adult-onset disease susceptibilities—has been suggested to be transmissible to subsequent generations, probably through epigenetic mechanisms [30]. In the case of ART procedures, the effect of ‘unusual’ environments during gametogenesis and early embryonic development on adult-onset disease and trans-generational inheritance is still not clear. Additional research is needed to elucidate the effects of ART on genome-wide epigenetic patterns and their link to human disease. As ART will continue to be an important medical intervention and the number of children born with the help of ART procedures will probably continue to rise in the future, it is crucial to understand the associated health risks and underlying molecular mechanisms of these technologies. This will increase the safety of this intervention and enable couples using ART to be fully informed regarding both present and future health-related risks.

Daria Grafodatskaya, Cheryl Cytrynbaum, Rosanna Weksberg

13.
How easy is it to acquire an organelle? How easy is it to lose one? Michael Gray considers the latest evidence in this regard concerning the chromalveolates.

How easy is it to acquire an organelle? How easy is it to lose one? These questions underpin the current debate about the evolution of the plastid—that is, the chloroplast—the organelle of photosynthesis in eukaryotic cells.

The origin of the plastid has been traced to an endosymbiosis between a eukaryotic host cell and a cyanobacterial symbiont, the latter gradually ceding genetic control to the former through endosymbiotic gene transfer (EGT). The resulting organelle now relies for its biogenesis and function on the expression of a small set of genes retained in the shrunken plastid genome, as well as a much larger set of transferred nuclear genes encoding proteins synthesized in the cytosol and imported into the organelle.

This scenario accounts for the so-called primary plastids in green algae and their land plant relatives, in red algae and in glaucophytes, which together comprise Plantae (or Archaeplastida)—one of five or six recognized eukaryotic supergroups (Adl et al, 2005). In other algal types, plastids are ‘second-hand’—they have been acquired not by taking up a cyanobacterium, but by taking up a primary-plastid-containing eukaryote (sometimes a green alga, sometimes a red alga) to produce secondary plastids. In most of these cases, all that remains of the eukaryotic symbiont is its plastid; the genes coding for plastid proteins have moved from the endosymbiont to the host nucleus. A eukaryotic host—which may or may not itself have a plastid—might also take up a secondary-plastid symbiont (generating tertiary plastids), or a secondary-plastid host might take up a primary-plastid symbiont. You get the picture: plastid evolution is complicated!

Several excellent recent reviews present expanded accounts of plastid evolution (Reyes-Prieto et al, 2007; Gould et al, 2008; Archibald, 2009; Keeling, 2009).
Here, I focus on one particular aspect of plastid evolutionary theory, the ‘chromalveolate hypothesis’, proposed in 1999 by Tom Cavalier-Smith (1999).

The chromalveolate hypothesis seeks to explain the origin of chlorophyll c-containing plastids in several eukaryotic groups, notably cryptophytes, alveolates (ciliates, dinoflagellates and apicomplexans), stramenopiles (heterokonts) and haptophytes—together dubbed the ‘chromalveolates’. The plastid-containing members of this assemblage are mainly eukaryotic algae with secondary plastids that were acquired through endosymbiosis with a red alga. The question is: how many times did such an endosymbiosis occur within the chromalveolate grouping?

A basic tenet of the chromalveolate hypothesis is that the evolutionary conversion of an endosymbiont to an organelle should be an exceedingly rare event, and a hard task for a biological system to accomplish, because the organism has to ‘learn’ how to target a large number of nucleus-encoded proteins—many of whose genes were acquired by EGT—back into the organelle. Our current understanding of this targeting process is detailed in the reviews cited earlier. Suffice it to say that the evolutionary requirements appear numerous and complex—sufficiently so that the chromalveolate hypothesis posits that secondary endosymbiosis involving a red alga happened only once, in a common ancestor of the various groups comprising the chromalveolates.

Considerable molecular and phylogenetic data have been marshalled over the past decade in support of the chromalveolate hypothesis; however, no single data set specifically unites all chromalveolates, even though there is compelling evidence for various subgroup relationships (Keeling, 2009). Moreover, within the proposed chromalveolate assemblage, plastid-containing lineages are interspersed with plastid-lacking ones—for example, ciliates in the alveolates, and oomycetes such as Phytophthora in the stramenopiles.
The chromalveolate hypothesis rationalizes such interspersion by assuming that the plastid was lost at some point during the evolution of the aplastidic lineages. The discovery in such aplastidic lineages of genes of putatively red algal origin, and in some cases suggestive evidence of a non-photosynthetic plastid remnant, would seem to be consistent with this thesis, although these instances are still few and far between.

In this context, two recent papers are notable in that the authors seek to falsify, through rigorous testing, several explicit predictions of the chromalveolate hypothesis—and in both cases they succeed in doing so. Because molecular phylogenies have failed either to robustly support or to robustly disprove the chromalveolate hypothesis, Baurain et al (2010) devised a phylogenomic falsification of the chromalveolate hypothesis that does not depend on full resolution of the eukaryotic tree. They argued that if the chlorophyll c-containing chromalveolate lineages all derive from a single red algal ancestor, then similar amounts of sequence from the three genetic compartments (nucleus, mitochondrion and plastid) should allow them to recover chromalveolate monophyly in all cases. The statistical support levels in their analysis refuted this prediction, leading them to “reject the chromalveolate hypothesis as falsified in favour of more complex evolutionary scenarios involving multiple higher order eukaryote–eukaryote endosymbioses”.

In another study, Stiller et al (2009) applied statistical tests to several a priori assumptions relating to the finding of genes of supposed algal origin in the aplastidic chromalveolate taxon Phytophthora. These authors determined that the signal from these genes “is inconsistent with the chromalveolate hypothesis, and better explained by alternative models of sequence and genome evolution”.

So, is the chromalveolate hypothesis dead? These new studies are certainly the most serious challenge yet.
Additional data, including genome sequences of poorly characterized chromalveolate lineages, will no doubt augment comparative phylogenomic studies aimed at evaluating the chromalveolate hypothesis—which these days is looking decidedly shaky.

14.
The temptation to silence dissenters whose non-mainstream views negatively affect public policies is powerful. However, silencing dissent, no matter how scientifically unsound it might be, can cause the public to mistrust science in general.

Dissent is crucial for the advancement of science. Disagreement is at the heart of peer review and is important for uncovering unjustified assumptions, flawed methodologies and problematic reasoning. Enabling and encouraging dissent also helps to generate alternative hypotheses, models and explanations. Yet, despite the importance of dissent in science, there is growing concern that dissenting voices have a negative effect on the public perception of science, on policy-making and on public health. In some cases, dissenting views are deliberately used to derail certain policies. For example, dissenting positions on climate change, environmental toxins or the hazards of tobacco smoke [1,2] seem to laypeople to be equally valid conflicting opinions, and thereby create or increase uncertainty. Critics often use legitimate scientific disagreements about narrow claims to reinforce the impression of uncertainty about general and widely accepted truths; for instance, that a given substance is harmful [3,4]. This impression of uncertainty about the evidence is then used to question particular policies [1,2,5,6].

The negative effects of dissent on establishing public policies are present even in cases in which the disagreements are scientifically well-grounded, but the significance of the dissent is misunderstood or blown out of proportion.
A study showing that many factors affect the size of reef islands, such that they will not necessarily shrink as sea levels rise [7], was simplistically interpreted by the media as evidence that climate change will not have a negative impact on reef islands [8].

In other instances, dissenting voices affect the public perception of, and motivation to follow, public-health policies or recommendations. For example, the publication of a now debunked link between the measles, mumps and rubella vaccine and autism [9], as well as the claim that the mercury preservative thimerosal, which was used in childhood vaccines, was a possible risk factor for autism [10,11], created public doubts about the safety of vaccinating children. Although later studies showed no evidence for these claims, the doubts led many parents to reject vaccinations for their children, compromising herd immunity against diseases that had been largely eradicated from the industrialized world [12,13,14,15]. Many scientists have therefore come to regard dissent as problematic if it has the potential to affect public behaviour and policy-making. However, we argue that such concerns about dissent as an obstacle to public policy are both dangerous and misguided.

Whether dissent is based on genuine scientific evidence or is unfounded, interested parties can use it to sow doubt, thwart public policies, promote problematic alternatives and lead the public to ignore sound advice. In response, scientists have adopted several strategies to limit these negative effects of dissent—masking dissent, silencing dissent and discrediting dissenters. The first strategy aims to present a united front to the public: scientists mask existing disagreements among themselves by presenting only those claims or pieces of evidence about which they agree [16].
Although there is nearly universal agreement among scientists that average global temperatures are increasing, there are also legitimate disagreements about how much warming will occur, how quickly it will occur and what impact it might have [7,17,18,19]. As presenting these disagreements to the public probably creates more doubt and uncertainty than is warranted, scientists react by presenting only general claims [20].

A second strategy is to silence dissenting views that might have negative consequences. This can take the form of self-censorship, when scientists are reluctant to publish or publicly discuss research that might—incorrectly—be used to question existing scientific knowledge. For example, there are genuine disagreements about how best to model cloud formation, water vapour feedback and aerosols in general circulation models, all of which have significant effects on the magnitude of global climate change predictions [17,19]. Yet, some scientists are hesitant to make these disagreements public, for fear that they will be accused of being denialists, faulted for confusing the public and policy-makers, censured for abetting climate-change deniers, or criticized for undermining public policy [21,22,23,24].

…there is growing concern that dissenting voices can have a negative effect on the public perception of science, on policy-making and public health

Another strategy is to discredit dissenters, especially in cases in which the dissent seems to be ideologically motivated. This could involve publicizing the financial or political ties of the dissenters [2,6,25], which would call attention to their probable bias. In other cases, scientists might discredit the expertise of the dissenter. One such example concerns a 2007 study published in the Proceedings of the National Academy of Sciences USA, which claimed that caddisfly larvae consuming Bt maize pollen die at twice the rate of larvae feeding on non-Bt maize pollen [26].
Immediately after publication, both the authors and the study itself became the target of relentless and sometimes scathing attacks from a group of scientists who were concerned that anti-GMO (genetically modified organism) interest groups would seize on the study to advance their agenda [27]. The article was criticized for its methodology and its conclusions, the Proceedings of the National Academy of Sciences USA was criticized for publishing the article, and the US National Science Foundation was criticized for funding the study in the first place.

Public policies, health advice and regulatory decisions should be based on the best available evidence and knowledge. As the public often lacks the expertise to assess the quality of dissenting views, disagreements have the potential to cast doubt over the reliability of scientific knowledge and lead the public to question relevant policies. Strategies to block dissent therefore seem reasonable as a means to protect much-needed or effective health policies, advice and regulations. However, even if the public were unable to evaluate the science appropriately, targeting dissent is not the most appropriate strategy to prevent negative side effects, for several reasons. Chiefly, it contributes to the very problem that the critics of dissent seek to address: it increases the cacophony of dissenting voices that aim only to create doubt. Focusing on dissent as a problematic activity sends the message to policy-makers and the public that any dissent undermines scientific knowledge. Reinforcing this false assumption further incentivizes those who seek merely to create doubt in order to thwart particular policies.
Not surprisingly, think-tanks, industry and other organizations are willing to manufacture dissent simply to derail policies that they find economically or ideologically undesirable.

Another danger of targeting dissent is that it probably stifles legitimate critical voices that are needed both for advancing science and for informing sound policy decisions. Attacking dissent makes scientists reluctant to voice genuine doubts, especially if they believe that doing so might harm their reputations, damage their careers or undermine prevailing theories or needed policies. For instance, a panel of scientists for the US National Academy of Sciences, when presenting a risk assessment of radiation in 1956, omitted wildly different predictions about the potential genetic harm of radiation [16]. They did not include this wide range of predictions in their final report precisely because they thought the differences would undermine confidence in their recommendations. Yet, this information could have been relevant to policy-makers. As such, targeting dissent as an obstacle to public policy might simply reinforce self-censorship and stifle legitimate and scientifically informed debate. If this happens, scientific progress is hindered.

Second, even if the public has mistaken beliefs about science or about the state of knowledge of the science in question, focusing on dissent is not an effective way to protect public policy from false claims. It fails to address the presumed cause of the problem—the apparent lack of understanding of the science by the public. A better alternative would be to promote the public's scientific literacy. If the public were educated to better assess the quality of dissent, and thus to disregard instances of ideological, unsupported or unsound dissent, dissenting voices would not have such a negative effect.
Of course, one might argue that educating the public would be costly and difficult, and that, therefore, the public should simply listen to scientists about which dissent to ignore and which to consider. This is, however, a paternalistic attitude that requires the public to remain ignorant ‘for their own good’; a position that seems unjustified on many levels, as there are better alternatives for addressing the problem.

Moreover, silencing dissent, rather than promoting scientific literacy, risks undermining public trust in science even if the dissent is invalid. This was exemplified by the 2009 case of hacked e-mails from a computer server at the University of East Anglia's Climate Research Unit (CRU). After the selective leaking of the e-mails, climate scientists at the CRU came under fire because some of the quotes, which were taken out of context, seemed to suggest that they were fudging data or suppressing dissenting views [28,29,30,31]. The stolen e-mails gave further ammunition to those opposing policies to reduce greenhouse emissions, as they could use accusations of a data ‘cover-up’ as proof that climate scientists were not being honest with the public [29,30,31]. They also allowed critics to present climate scientists as conspirators who were trying to push a political agenda [32]. As a result, although nothing scientifically inappropriate was revealed in the ‘climategate’ e-mails, the affair had the consequence of undermining the public's trust in climate science [33,34,35,36].

A significant amount of evidence shows that the ‘deficit model’ of public understanding of science, as described above, is too simplistic to account correctly for the public's reluctance to accept particular policy decisions [37,38,39,40]. It ignores other important factors, such as people's attitudes towards science and technology, their social, political and ethical values, their past experiences and their trust in governmental institutions [41,42,43,44].
The development of sound public policy depends not only on good science, but also on value judgements. One can agree with the scientific evidence for the safety of GMOs, for instance, but still disagree with the widespread use of GMOs because of social justice concerns about the developing world's dependence on the interests of the global market. Similarly, one need not reject the scientific evidence about the harmful health effects of sugar to reject regulations on sugary drinks. One could rationally challenge such regulations on the grounds that informed citizens ought to be able to make free decisions about what they consume. Whether or not these value judgements are justified is an open question, but the focus on dissent hinders our ability to have that debate.

Focusing on dissent as a problematic activity sends the message to policy-makers and the public that any dissent undermines scientific knowledge

As such, targeting dissent completely fails to address the real issues. The focus on dissent, and the threat that it seems to pose to public policy, misdiagnoses the problem as one of the public misunderstanding science, its quality and its authority. It assumes that scientific or technological knowledge is the only relevant factor in the development of policy, and it ignores the role of other factors, such as value judgements about social benefits and harms, and institutional trust and reliability [45,46]. The emphasis on dissent, and thus on scientific knowledge, as the only or main factor in public policy decisions does not give due attention to these legitimate considerations.

Furthermore, by misdiagnosing the problem, targeting dissent also impedes more effective solutions and prevents an informed debate about the values that should guide public policy. By framing policy debates solely as debates over scientific facts, the normative aspects of public policy are hidden and neglected.
Relevant ethical, social and political values fail to be publicly acknowledged and openly discussed.

Controversies over GMOs and climate policies have called attention to the negative effects of dissent in the scientific community. Based on the assumption that the public's reluctance to support particular policies is the result of an inability to properly understand scientific evidence, scientists have tried to limit dissenting views that create doubt. However, as outlined above, targeting dissent as an obstacle to public policy probably does more harm than good. It fails to focus on the real problem at stake—that science is not the only relevant factor in sound policy-making. Of course, we do not deny that scientific evidence is important to the development of public policy and behavioural decisions. Rather, our claim is that this role is misunderstood and often oversimplified in ways that actually contribute to problems in developing sound science-based policies.

Inmaculada de Melo-Martín, Kristen Intemann

15.
16.
17.
Achieving food security for the future pits production increase against growth control

How do we feed the nine billion people who are projected to inhabit the Earth by 2050? The issue is one of serious concern (Ash et al, 2010; Butler, 2010), as an increase in food production of up to 40% will be needed to cope with the growing population. In response, many scientists, politicians and economists have proposed a second ‘green revolution’. Their call references the first green revolution of the mid-twentieth century, which allowed many developing countries to drastically increase their food production. According to proponents of a new ‘global greener revolution’ (GGR), it will require an extensive transformation of agriculture to increase production and improve quality in an equitable and sustainable manner without compromising the environment (Godfray et al, 2010). Science and technology will be fundamental to achieving the goals of enhancing crop efficiency and food quality, as well as developing new protein sources (Beddington, 2010).

…further analysis reveals that a GGR is not as charitable as it first appears; in fact, it could lead to undesired and even disastrous consequences

At a glance, such a philanthropic proposal might seem the right thing to do, but further analysis reveals that a GGR is not as charitable as it first appears; in fact, it could lead to undesired and even disastrous consequences.
This essay is therefore intended as a warning to scientists to think critically before signing up to a GGR: consider carefully the political, social and economic forces that would benefit from such a revolution, and the potential long-term consequences for the environment and mankind.

In an article for the Philosophical Transactions of the Royal Society, Sir John Beddington, the UK Government's chief scientific adviser and professor of applied population biology at Imperial College London, lists the four main challenges for humanity in the twenty-first century as follows: to feed nine billion people in a sustainable way; to cope with increasing demands for clean water; to generate more energy; and to do all of this while mitigating and adapting to climate change (Beddington, 2010). Science will play a crucial role in this endeavour, provided the necessary investments are made.

The kinds of advances in science that the world requires are far-reaching and various. Plant science will need to improve existing crops by breeding or genetic modification to increase photosynthetic efficiency, reduce the need for fertilizers, and develop new methods of pest, disease and weed control. Agricultural science and farmers need to develop sustainable livestock farming that reduces the emission of greenhouse gases, notably methane. Fisheries and aquaculture—high priorities for future food security—will require scientific knowledge and technological innovations to avoid over-fishing, to increase productivity and to deal with climate change and ocean acidification. Engineers will need to develop tools such as global positioning system-based fertilizing or watering systems and remote sensors to optimize the use of resources in agriculture. Nanotechnologies, genomics and electronics can be useful for improving disease diagnostics, the delivery of pesticides, fertilizers and water, or for monitoring and managing soil quality.
Finally, science will also play a role in changing our diet to reduce the consumption of meat and dairy products and to develop alternative protein sources (The Royal Society, 2009; Beddington, 2010; Godfray et al, 2010).

…the whole planet could turn into one giant farm for producing food and biofuels, with little or no wilderness left

Together, these goals aim to achieve so-called sustainable intensification: producing more food from a given area while reducing the environmental impact (Godfray et al, 2010). This is a considerable challenge, resting on the hope that ‘greener’ innovations—mostly based on molecular biology and genetic manipulation of plants and farm animals—will be environmentally safer, although this is not a straightforward path in many cases.

Scale matters in this endeavour, in terms of both space and time. Concerning space, the amount of land and sea surface needed to produce food for nine billion people will obviously be much larger than at present, any scientific progress notwithstanding. As such, given time, the whole planet could turn into one giant farm for producing food and biofuels, with little or no wilderness left. For defenders of the ownership approach (Bruce, 2008), for whom the Earth is ours to be exploited at our convenience, this vision might not be disturbing; nevertheless, the consequences would be catastrophic, not least because this approach gives no consideration to a sustainable future beyond this century. It is important to bear in mind that the GGR is proposed as a means to cope with human population growth during the next 40 years only. This might seem a long-term view from today's perspective (Godfray et al, 2010), but it barely considers even the next two generations.
A true long-term view needs to embrace a far more extended timeframe and consider our great-grandchildren and the world they might live in.

If a GGR were a resounding success, most humans living beyond 2050 would be fed and healthy, but they would inherit a planetary farm with little wilderness and biodiversity. This, together with the possibility of notably extending life expectancy (Lucke et al, 2010) and the conviction that the next GGR will always be possible—as it has been in the past—will probably exacerbate population growth rates and the demand for another, even greener, revolution. In fact, the human population could reach around 14 billion people by 2100 at current growth rates (FAO, 2006), and the number might be even higher if the proposed GGR succeeds.

A true long-term view needs to embrace a far more extended timeframe and consider our great-grandchildren and the world they might live in

As the Earth's carrying capacity is finite (Hueting, 2010; Pelletier, 2010), a GGR would lead to vanishing wilderness, resource exhaustion and, eventually, societal collapse. According to the latest estimates, we are already beyond the Earth's carrying capacity: we would need around 1.2 Earths to support just the current population at its present growth rate (WWF, 2008). In addition to resource exhaustion, another substantial problem of continued growth is the management of the waste generated by humankind, which at present is estimated to be around 30–40% of the food produced (Godfray et al, 2010). This mountain of refuse is likely to increase by orders of magnitude in the coming decades (Pelletier, 2010). Therefore, a GGR might be useful, at best, to cope with the near-term requirements of a hungry humanity—the next two generations or so—but it is unsustainable in the medium to long term.
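The 14-billion-by-2100 figure above follows from simple compound-growth arithmetic. As a rough illustration only—the starting population and the constant ~0.8% annual growth rate below are assumptions chosen for the sketch, not values taken from the FAO report—a projection can be computed like this:

```python
def project_population(p0, rate, years):
    """Compound-growth projection: p0 * (1 + rate) ** years.

    Assumes the annual growth rate stays constant, which real
    demographic projections do not; illustrative only.
    """
    return p0 * (1.0 + rate) ** years

# Roughly 7 billion people in 2010, growing at an assumed constant
# 0.8% per year, projected 90 years forward to 2100.
p_2100 = project_population(7.0e9, 0.008, 90)
print(round(p_2100 / 1e9, 1))  # → 14.3 (billion)
```

The point of the sketch is how sensitive such projections are to the assumed rate: small changes in the constant growth rate compound into differences of billions over a 90-year horizon, which is why scenarios with and without a successful GGR diverge so sharply.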
Still, some solution is needed, as current and projected starvation is ethically unacceptable and might lead to social conflict and war.

In this context, the issue of equity, or intra-generational social justice—despite the fact that it is mentioned as a premise in almost all proposals on food security—is rarely addressed. Almost everyone agrees that wealth and health should be equitably distributed throughout the world, but there are no firm proposals on how to achieve this goal, and little progress has been made. It is a political problem that requires a political solution, but international organizations—notably the United Nations (UN) and its subordinate bodies—have not been able to tackle it, and there is little hope that they will in the current political climate.

As the Earth's carrying capacity is finite […], a GGR would lead to vanishing wilderness, resource exhaustion and, eventually, societal collapse

The inequality prevalent in the world serves the economic interest of the richest nations through the near-ubiquitous capitalist model, which equates development with increasing wealth, measured as the gross domestic product (GDP) of a country. Increasing globalization—along with the recent demise of the socialist model—has promoted the export of the capitalist model to almost every country. As a result, and through the influence of organizations such as the World Trade Organization, the International Monetary Fund and the World Bank, capitalism has become the dominant economic model. Other issues such as international law, international security, economic development, social progress and human rights are subject to the political and economic interests of the richest economies. Social and environmental policies remain subordinate to capitalist concerns at both the local and regional scale (Pelletier, 2010). The inequality thus created is the cause of starvation and malnutrition in developing countries.
Before 2005, more than 850 million people were undernourished. This number then increased by 75 million in only two years, owing mainly to market-driven rises in wheat and maize prices (Beddington, 2010). Today, hunger is not only a problem of overpopulation but also, to a great extent, one of intra-generational injustice. This means that fighting starvation is a matter not only of growing more food, but also of creating social equity, which requires economic and political action.

Future population growth and the corresponding demand for more food therefore support the current capitalist model, which is based on economic growth and unequal wealth distribution. A GGR would be subject to this growth model; in other words, capitalism, not humanity, needs a GGR. Scientists should be aware of this and consider whether a GGR is really the best option from both a professional and a personal point of view, as science should serve humanity and the Earth, not any particular social, religious, ideological, political or economic system (Rull, 2010).

Those who prefer a more sustainable path for future development might consider demand reduction—an option for avoiding future food scarcity that is rarely considered (Westing, 2010). For their part, economists and politicians should also develop and implement alternative economic models that aim for a sustainable future for both humans and nature. The alternative—trying to reconcile economic growth, social justice and environmental safety—is akin to putting a square peg into a round hole (Lawn, 2010). In his 2008 book, The Bridge at the Edge of the World, the environmental advocate James Speth laments that modern capitalism is already out of control and that “growth is the enemy of environment.
Economy and environment remain in collision” (Speth, 2009).

There are alternative economic models that recognize ecological limits to human development and emphasize social equity. The first of these proposes a steady-state economy: one that has stopped growing in terms of GDP, but continues to improve quality of life and is maintained by an ecologically sustainable rate of resource throughput and a constant human population (Kerschner, 2010; Lawn, 2010). The second is a sustainable de-growth model, which has been defined as “an equitable down-scaling of production and consumption that increases human well-being and enhances ecological conditions at the local and global level, in the short and long term” (Schneider et al, 2010). The paradigm is that human progress without economic growth is possible; it has been shown repeatedly that, above a certain level of satisfying people's basic needs, GDP per capita does not correlate with overall happiness (Layard, 2010). According to these proposals, rich nations would need to start the transition to a steady-state economy through the reduction of GDP, or de-growth, within the next 5 years, and poor nations could take 20–40 years to make the transition in order to ensure a sustainable future. As many poor nations have the highest population growth rates, a first step should be to implement suitable controls to stabilize their populations with support from rich countries.

The defenders of de-growth emphasize that this process is not the same as recession or depression—there should be no social or quality-of-life deterioration—nor does it promote a return to a fictitious pre-industrial pastoral past.
GDP reduction would mainly involve components that require large-scale, resource-intensive production, together with socio-political and lifestyle changes (Schneider et al, 2010). Steady-state and de-growth models are based on the principles of ecological economics, which emphasizes the importance of the interactions between the environment and the economy, and of biophysical laws and constraints on human development (Costanza et al, 1997; Victor, 2010). Ecological economics rests on simple premises. The laws of thermodynamics state that the amount of energy in a closed system is constant and that any transformation degrades usable energy into entropy; all economic activities therefore deplete the available stock of usable energy and produce entropic waste. A closed system such as the Earth thus has a limited capacity to supply energy and material resources and to absorb the associated entropic waste (Pelletier, 2010).

Among the different meanings of sustainability, ecological economics defends a so-called strong sustainability (Munda, 1997). This is in contrast to weak sustainability, which assumes an abundance of natural resources and that technological progress can increase the productivity of natural capital faster than it is being depleted. Weak sustainability could be considered a moderate version of the planetary-ownership view (Bruce, 2008). By contrast, strong sustainability argues that natural capital—which provides raw materials for production and consumption, assimilates the resulting waste products, and provides amenity services and basic life-support functions on which human life depends—is largely non-substitutable (Neumayer, 2003; Dietz & Neumayer, 2007). The idea behind strong sustainability is to strike a balance between intervention in nature and its conservation—that is, the stewardship approach described by Bruce (2008).
Despite its concern for nature, the idea of strong sustainability is still anthropocentric, as its primary objective is human survival and welfare. Strong sustainability could therefore be viewed simply as a wiser form of planetary ownership than weak sustainability.

Although steady-state and de-growth are interesting and promising proposals to address the problem of food security, there are some concerns; namely, the considerable changes required of socio-political organizations and lifestyles, the adherence to an intrinsically anthropocentric concept of sustainability, and the lack of a consolidated programme to realize these ideas. Indeed, affluent democratic societies might be highly resistant to the necessary changes in lifestyle and consumption. A reduction of material living standards and consumption in industrialized countries would probably cause feelings of loss (Matthey, 2010). Few politicians or political parties with aspirations to government would be willing to defend such an unpopular proposal. Another obstacle in Western democratic systems is the short duration of each government, usually 4–5 years. Most governments are therefore reluctant to address problems that require large-scale, long-term changes. The problem is even more serious given that international organizations such as the UN, which were created specifically to meet such global challenges, remain subject to the political and economic interests of the richer countries and are therefore powerless to implement changes in those nations. Some have therefore proposed the creation of a new World Environmental Organization with the teeth and authority to legislate and enforce compliance (Pelletier, 2010).

The problem of acceptance might be even worse in developing nations.
The promise of capitalism has created expectations of wealth and consumption in these countries that people would be asked to renounce even before they have had a chance to enjoy them. Population control alone is thus not sufficient, as most humans also need more food, better health and better living conditions. To mitigate this problem, it has been proposed that developed countries should switch to a steady-state economy now, thereby leaving space for growth in the developing nations as a sign of intra-generational social justice (Kerschner, 2010). Of course, such economic growth should include effective population control in order to increase per capita income and social and individual well-being. To make such growth sustainable, it would still require a GGR to increase food production and reduce the degradation of nature.

Worldwide social justice is a complex issue that is beyond the scope of this article, but some ideas are pertinent in this context. Perhaps our lack of a species consciousness is a main obstacle to attaining goals such as intra-generational justice, the eradication of hunger, sustainable development and nature conservation—all of which are apparently desired by most people. Humanity has won its battles against its competitors—other violent, omnivorous species—but has organized itself in such a way that different nations, ideologies, races, social classes and so on compete with each other as though they were ‘cultural species'. In this context, capitalism is a successful strategy with strong selective value to increase evolutionary fitness. Some anthropologists believe that we are not yet humans, as we are still too attached to ancestral primate values such as selfishness, territoriality and violence (Carbonell & Sala, 2001). According to the same authors, the necessary species consciousness will emerge from altruism and the socialization of knowledge (Carbonell, 2007).
Apart from the manifest ownership attitude of these anthropologists—whose ultimate aim is to replace the natural order with human organization of the Earth—their concept of a global human species consciousness, and how to attain it, could be useful as a tool to address sustainable development under ecological economics principles.

The formulation of ideas to achieve steady-state and de-growth economies is still in progress, but some clues to a solution can already be seen. For example, Lawn (2010) offers some macroeconomic considerations on how governments can regulate the private sector to facilitate the transition to a steady-state economy. Another interesting proposal is to reduce the dependence on markets and to develop alternative political and economic infrastructures with different values (Latouche, 2010). Steady-state and de-growth proposals are encouraging manifestations of the interest of certain economic sectors in developing credible and viable alternatives to uncontrolled growth, but more options are needed, with special emphasis on reducing or avoiding anthropocentrism and on limiting or eliminating the prevalence of the market economy (Rull, 2010). Economic crises such as the present one are excellent opportunities for questioning the dominant capitalist model (Schneider et al, 2010; Johns, 2010). Now is the time for economic creativity and political will.

In the context of a GGR, scientific research and technological development are parts of the so-called sustainable intensification needed to produce more food. In the steady-state or de-growth models, science and technology are tools to reduce the land needed to produce a given amount of food.
The key is the big picture; molecular research focused on food improvement is justified, but its contribution to either development model depends—as do most, if not all, scientific contributions—on social and political interests. In this regard, the scientific and technological developments proposed in the context of a GGR—such as crop improvement and protection, sustainable livestock farming, improved fishing and aquaculture, mechanization, engineering, nanotechnology and dietary changes—should be encouraged anyway, as they can contribute greatly to more efficient and, hopefully, safer food production practices in the future.

In summary, while global capitalism needs a GGR to continue along its unsustainable path, there are alternative models of human development that accept and address the biophysical constraints on economic and population growth on Earth. Some steady-state and de-growth alternatives have been proposed, based on the emerging discipline of ecological economics, but these would require a political and societal revolution, and a reassessment of the role of the market economy and of true nature conservation. Nevertheless, the basic principles of ecological economics seem potentially useful if we are to avoid a succession of GGRs that exhaust the Earth's resources. The acceptance of those principles could represent a first step towards a better world.

It is beyond all doubt that scientists defending a GGR have good intentions, but this should be done in a different scenario than the utopia of unlimited growth. Otherwise, politicians, stakeholders and the public in general might get a wrong idea of what is considered right from a scientific point of view and, worse, might lose confidence in science and its practitioners.

Valentí Rull

Maintaining active zone structure is crucial for synaptic function. In this issue of EMBO reports, NMNAT is shown to act as a chaperone that protects the active zone structural protein Bruchpilot from degradation.

EMBO reports (2013) 14 1, 87–94 doi:10.1038/embor.2012.181

Synapses perform several tasks independently from the cell body of the neuron, including synaptic vesicle recycling through endocytosis and local protein maturation and degradation. Failure to regulate protein function locally is detrimental to the nervous system, as evidenced by the neuronal dysfunctions that arise as a consequence of synaptic ageing. This relative synaptic autonomy comes with a need for mechanisms that ensure correct protein (re)folding, and there is accumulating evidence that key chaperones have a central role in the regulation and maintenance of synaptic structural integrity and function [1]. Work by Grace Zhai's group, published in this issue of EMBO reports, demonstrates a key role of the Drosophila nicotinamide mononucleotide adenylyltransferase (NMNAT) chaperone in the protection of active zone components against activity-induced degeneration (Fig 1; [2]).

Figure 1: Results reported by Zang and colleagues [2] reveal a specific role of nicotinamide mononucleotide adenylyltransferase (NMNAT) in preserving active zone structure against use-dependent decline. This protection is exerted by direct interaction with BRP and protection of this key structural protein against ubiquitination and subsequent degradation. BRP, Bruchpilot; Ub, ubiquitin.

Active zones, the specialized sites for neurotransmitter release at presynaptic terminals, are characterized by a dense protein network called the cytomatrix at the active zone (CAZ). The protein machinery of the CAZ is responsible for efficient synaptic vesicle tethering, docking and fusion with the presynaptic membrane and, thus, for reliable signal transmission from the neuron to the postsynaptic cell.
Clearly, proteins in the CAZ are tightly regulated, especially in response to external cues such as synaptic activity [3,4]. Yet this particularly crowded protein environment might favour the formation of non-functional—and sometimes toxic—protein aggregates. Chaperones that act at the synapse reduce the probability of crucial protein aggregation by preventing and reverting the inappropriate interactions that result from environmental stress.

One of these chaperones, the Drosophila neuroprotective NMNAT, was identified in a genetic screen for factors involved in synapse function [5]. Its chaperone activity was later confirmed by using in vitro and in vivo protein-folding assays [6]. NMNAT null mutants show severe and early-onset neurodegeneration, whereas neurodevelopment does not seem to be strongly affected. Interestingly, degeneration of photoreceptors lacking NMNAT can be significantly attenuated by limiting synaptic activity, either by rearing flies in the dark or by introducing the no receptor potential A (norpA) mutation that blocks phototransduction [5]. These results indicate that NMNAT protects adult neurons from activity-induced degeneration.

In this issue of EMBO reports, Zang and colleagues report a role for NMNAT at the synapse. They observed that loss or reduced levels of NMNAT lead to a concomitant loss of several synaptic markers, including cysteine-string protein (CSP), synaptotagmin and the active zone structural protein Bruchpilot (BRP). Remarkably, BRP was the only one of these proteins found to co-immunoprecipitate with NMNAT from brain lysates.
Both proteins show approximately 50% co-localization at the neuromuscular junction when imaged by 3D-SIM super-resolution microscopy, suggesting that NMNAT might act directly as a chaperone to maintain a functional BRP conformation. Consistent with a protective role of NMNAT against BRP degradation, RNA interference-mediated NMNAT knockdown leads to BRP ubiquitination, whereas this modification was not detected in control brain lysates. Given the involvement of the ubiquitin–proteasome pathway in regulating synaptic development and function [1], the authors tested the effect of the proteasome inhibitor MG-132 on BRP ubiquitination. They observed an increased level of BRP ubiquitination in wild-type flies fed with this drug, suggesting a role for the proteasome in the clearance of ubiquitinated BRP. By contrast, overexpression of NMNAT reduces the level of BRP ubiquitination both in the absence and in the presence of MG-132, providing further evidence for the protective role of this chaperone against ubiquitination of BRP (Fig 1).

BRP is a cytoskeletal-like protein that is an integral component of T-bars—electron-dense structures that project from the presynaptic membrane and around which synaptic vesicles cluster. In agreement with a protective role of NMNAT against BRP ubiquitination, reduced levels of this chaperone give rise to a marked decrease in T-bar size in an age-dependent manner (Fig 1). Active zones are known to show dynamic changes in response to synaptic activity, and NMNAT was previously reported to protect photoreceptors against activity-induced degeneration [5]. The authors thus tested the effect of minimizing photoreceptor activity on active zone structure by keeping flies in the dark or by inhibiting phototransduction by means of the norpA mutation.
Both manipulations largely reversed the effect of NMNAT knockdown on T-bar size. Absence of light exposure also significantly reduced the amount of BRP that co-immunoprecipitates with NMNAT, indicating that neuronal activity regulates the NMNAT–BRP interaction. Further experiments are needed to examine whether there is a positive correlation between synaptic activity and BRP ubiquitination levels, and whether NMNAT can indeed keep T-bar structure intact by protecting BRP against this modification under conditions of high synaptic activity.

Finally, the study shows that reduced NMNAT levels cause not only a loss of BRP from the synapse but also a specific mislocalization of this protein to the cell body, where it accumulates in clusters together with the remaining NMNAT protein. Under these conditions, BRP co-immunoprecipitated with the stress-induced Hsp70, a chaperone classically used as a marker for protein aggregation. It is still unclear whether these BRP clusters form as a result of defective anterograde trafficking and/or enhanced retrograde transport of BRP. In the absence of light stimulation, T-bars are properly assembled in nmnat null photoreceptors, but at this stage a role of NMNAT in regulating the axonal transport of BRP under conditions of normal synaptic activity cannot be excluded. Notably, two independent recent reports show an involvement of NMNAT in mitochondrial mobility [7,8].

As BRP and NMNAT co-localize and interact with one another, the simplest model that accounts for all the observations by Zang et al is that NMNAT directly prevents activity-induced ubiquitination of BRP and its subsequent degradation. Yet, as its name indicates, this chaperone is also an essential enzyme in NAD synthesis. The Bellen lab previously showed that mutant versions of NMNAT that are impaired for NAD production rescue the photoreceptor degeneration caused by loss of NMNAT [5].
This strongly suggests that NAD production is not required for stabilization of BRP, although this might need further scrutiny [9].

While providing further insights into the role of NMNAT at the active zone in Drosophila, the paper by Zang et al might also have important implications for neurodegeneration in mammals. When ectopically expressed in mice, Nmnat protects against Wallerian degeneration—the synapse and axon degeneration that rapidly occurs distal to an axonal wound in wild-type animals. This process is significantly delayed in mice overexpressing a chimaeric protein consisting of the amino-terminal 70 residues of the ubiquitination factor E4B (Ube4b) fused through a linker to Nmnat1, known as the Wallerian degeneration slow (Wlds) protein. Conversely, mutations in the human NMNAT1 gene were characterized in several families with Leber congenital amaurosis—a severe, early-onset neurodegenerative disease of the retina [10,11,12,13]. As Wlds or Nmnat1 overexpression protects axons from degeneration in various disease models [9], Nmnat1 emerges as a promising candidate for developing protective strategies against axonal degeneration not only in peripheral neuropathies such as amyotrophic lateral sclerosis but also in glaucoma, AIDS and other diseases [9].

Two recent studies, one in this issue of EMBO reports and one in Molecular Cell, identify Dop as a depupylase, ascribing a novel function to Dop and providing further evidence for the functional similarity of the prokaryotic Pup-modification system and the eukaryotic ubiquitin system.

EMBO Rep (2010) advance online publication. doi: 10.1038/embor.2010.119

Protein homeostasis is fundamental to the function of all cellular systems. In eukaryotes, the ubiquitin–proteasome pathway mediates regulated protein degradation. Intensive studies of the eukaryotic proteasome over the past decades have unravelled the complexity of this multi-subunit, ATP-dependent protease, and proteasome inhibitors are now established anticancer drugs (Finley, 2009). Prokaryotes use ATP-dependent proteases—such as Lon, ClpP and FtsH—for protein degradation. In addition, some bacteria in the class Actinomycetes have acquired a proteasome that shares sequence and structural homology with its eukaryotic counterpart (Darwin, 2009). The function of the prokaryotic proteasome and its implication in pathogenesis is the subject of ongoing research. In Mycobacterium tuberculosis, proteasome activity is essential for the pathogen to persist in macrophages of the lung epithelium and could therefore be a target for antimicrobial treatment (Darwin, 2009).

Labelling substrates for proteasomal degradation is well understood in eukaryotes, in which ubiquitin is attached to proteins that are subsequently recognized by proteasomal subunits and degraded (Finley, 2009). A similar tagging system has recently been identified in M. tuberculosis, in which the prokaryotic ubiquitin-like protein (Pup) serves as a ubiquitin analogue (Pearce et al, 2008). Subsequent proteome-wide studies have identified hundreds of Pup-tagged substrates in different mycobacteria, defining the ‘pupylome' (Festa et al, 2010; Poulsen et al, 2010).
Pupylated proteins are recognized by the proteasome-associated ATPase Mpa, which unfolds proteins before they are degraded in the proteolytic core (Darwin, 2009). Ubiquitination is reversed by specific deubiquitinases, but whether pupylation is also reversible was previously unknown. Two studies—by Darwin and colleagues and, in this issue of EMBO reports, by Weber-Ban and colleagues—have now demonstrated that Pup is removed from substrates when they are incubated with mycobacterial lysates (Burns et al, 2010; Imkamp et al, 2010). This suggests the presence of one or more ‘depupylases', and indicates that pupylation is a complex and versatile process, much like ubiquitination.

Pup and ubiquitin conjugation are mechanistically unrelated; ubiquitin is ligated through its carboxy-terminal glycine residue to lysine residues of target proteins by an enzymatic cascade comprising E1, E2 and E3 enzymes (Dye & Schulman, 2007). By contrast, the pupylation machinery seems to be simpler; a single ligating enzyme, proteasome accessory factor A (PafA), mediates isopeptide bond formation between the C-terminal glutamic acid side-chain carboxyl group of Pup and a substrate lysine residue (Sutter et al, 2010).

Only about half of the Pup-containing bacteria encode a glutamic acid residue at the C-terminus of Pup (Striebel et al, 2009). In the remaining species, including M. tuberculosis, the Pup gene encodes a C-terminal glutamine, which requires deamidation to glutamic acid before conjugation to substrates can occur. This activating deamidation step is carried out by the deamidase of Pup (Dop; Striebel et al, 2009). Curiously, the dop gene is conserved in all Pup-containing bacterial species (with the exception of Plesiocystis pacifica), including those in which initial deamidation is unnecessary.

Imkamp et al and Burns et al now identify Dop as a depupylase in the Pup-modification pathway.
Hydrolysis of Pup from model substrates in vitro is abolished in a dop-deficient bacterial lysate, or in lysate expressing a mutant form of dop, but can be restored by complementation with dop. Dop is able to depupylate many proteins when tested against the pupylome, suggesting a broad substrate spectrum. By contrast, without Dop the pupylome is unchanged over time, indicating that Dop might be the main depupylase in mycobacteria. Purified Dop from M. tuberculosis shows depupylase activity against model substrates. Finally, Imkamp et al analyse a Dop homologue from Corynebacterium glutamicum, which encodes PupGlu and hence does not depend on deamidation. This Dop homologue, expressed recombinantly and purified from Escherichia coli—which does not harbour the Pup–proteasome system—is an active depupylase in vitro.

Both groups then investigated the functional relationship between Pup/Dop and the proteasomal ATPase Mpa. Burns et al found that Mpa is required in vivo for depupylation of a proteasome substrate. Imkamp et al found that Mpa significantly increases depupylation activity in vitro. The mechanism for this remains unclear, but full-length Pup seems to be essential for Mpa-mediated activation, as depupylation is not enhanced with an amino-terminally truncated Pup. Previous work has indicated that the N-terminus of Pup is required to initiate substrate unfolding (Striebel et al, 2010), and Imkamp et al speculate that unfolding makes the isopeptide bond more accessible for interaction with Dop. Evidence for this comes from the observation that Dop can cleave a peptide substrate with an accessible isopeptide bond at the same rate in the presence or absence of Mpa. It is intriguing that Dop co-purifies with the pupylome (Burns et al, 2010); this suggests that Dop has significant affinity but low activity for pupylated substrates.
This might, however, prime the system for depupylation after Mpa interaction. Corynebacteria do not have a proteasome, but maintain the pupylation machinery comprising Pup, PafA, Dop and the proteasomal ATPase ARC (a homologue of Mpa). Here, the fate of Pup-tagged proteins cannot be proteasomal degradation, although substrate unfolding by ARC could initiate degradation by other proteases. Pupylation in proteasome-deficient bacteria might thus point to additional, non-degradative functions for this modification.

Both studies demonstrate that Dop acts as a depupylase in Pup-containing bacteria, in addition to its previously reported deamidation role in mycobacteria. In fact, the chemical reactions underlying depupylation and deamidation are mechanistically similar. The key functional question that remains is whether Dop protects substrates from proteasomal degradation. Alternative explanations are that Dop acts in conjunction with Mpa or the proteasome to recycle Pup, or that it reverses non-degradative roles of pupylation (Fig 1).

Figure 1: Emerging roles for Dop. (A) The pupylation system. (1) Dop functions as a deamidase, converting PupGln to PupGlu. PafA ligates PupGlu to substrates, which are targeted to Mpa and the proteasome and are degraded. (2) Dop can reverse pupylation on substrates and might rescue substrates from degradation. (B) Dop might act to recycle Pup, either (3a) at the Mpa/proteasome level or (3b) by binding to pupylated substrates, where Mpa-mediated substrate unfolding activates Dop. (C) (4) The existence of Dop in proteasome-deficient bacteria might indicate that Dop antagonizes non-degradative roles of Pup. Dop, deamidase of Pup; Mpa, Mycobacterium proteasome-associated ATPase; PafA, proteasome accessory factor A; Pup, prokaryotic ubiquitin-like protein.

So far, nothing is known about the regulation of Dop.
It will be interesting to analyse expression profiles to determine whether Dop is regulated independently of other proteins in this system. Other open questions concern the existence of co-factors and binding partners, and the organization of the Pup–Dop–Mpa network. Structural studies of the Dop enzyme will hopefully increase our understanding of its roles in depupylation.

In conclusion, Dop in the pupylation system has the potential to combine all known functions of deubiquitinases in the ubiquitin system: processing of precursors, rescuing substrates from degradation, recycling the modifier and reversing potential non-degradative roles of pupylation. The identification of the first depupylase opens an exciting new research field to unravel the functional consequences of depupylation.
