1.
Thomas Erren and colleagues point out that studies on light and circadian rhythmicity in humans have their own interesting pitfalls, of which all researchers should be mindful.

We would like to compliment, and complement, the recent Opinion in EMBO reports by Stuart Peirson and Russell Foster (2011), which calls attention to the potential obstacles associated with linking observations on light and circadian rhythmicity made in nocturnal mice to diurnally active humans. Pitfalls to consider include qualitative extrapolations from short-lived rodents to long-lived humans, quantitative extrapolations across very different doses (Gold et al, 1992), and the varying sensitivity of each species to experimental optical radiation as a circadian stimulus (Bullough et al, 2006), all of which can have a critical influence on an experiment. Thus, Peirson & Foster remind us that “humans are not big mice”. We certainly agree, but we also thought it worthwhile to point out that human studies have their own interesting pitfalls, of which all researchers should be mindful.

Many investigations with humans—such as testing the effects of different light exposures on alertness, cognitive performance, well-being and depression—can suffer from what has been coined the ‘Hawthorne effect’. The term is derived from a series of studies conducted at the Western Electric Company’s Hawthorne Works near Chicago, Illinois, between 1924 and 1932, to test whether the productivity of workers would change with changing illumination levels. One important punch line was that productivity increased with almost any change that was made at the workplaces. One prevailing interpretation of these findings is that humans who know that they are being studied—and in most investigations they cannot help but notice—might exhibit responses that have little or nothing to do with what was intended as the experiment.
Those who conduct circadian biology studies in humans try hard to eliminate possible ‘Hawthorne effects’, but every so often all they can do is hope for the best and expect the Hawthorne effect to be insignificant.

Even so, and despite the obstacles to circadian experiments with both mice and humans, the wealth of information from work in both species is indispensable. To exemplify: in the last handful of years alone, experimental research in mice has substantially contributed to our understanding of the retinal interface between visible light and circadian circuitry (Chen et al, 2011); has shown that disturbances of the circadian system through manipulations of the light–dark cycle might accelerate carcinogenesis (Filipski et al, 2009); and has suggested that perinatal light exposure—through an imprinting of the stability of circadian systems (Ciarleglio et al, 2011)—might be related to a human’s susceptibility to mood disorders (Erren et al, 2011a) and to the development of internal cancers later in life (Erren et al, 2011b). Future studies in humans must now examine whether, and to what extent, what was found in mice is applicable to and relevant for humans.

The bottom line is that we must be aware of, and first and foremost exploit, evolutionary legacies, such as the seemingly ubiquitous photoreceptive clockwork that marine and terrestrial vertebrates—including mammals such as mice and humans—share (Erren et al, 2008). Translating insights from studies in animals to humans (Erren et al, 2011a,b), and vice versa, into testable research can be a means to one end: to arrive at sensible answers to pressing questions about light and circadian clockworks that, no doubt, play key roles in human health and disease. Pitfalls, however, abound on either side, and we agree with Peirson & Foster that they have to be recognized and monitored.
3.
Elucidating the temporal order of silencing
Izaurralde E. EMBO reports (2012) 13(8): 662–663
8.
Gerald Schatten. EMBO reports (2013) 14(1): 4
The differentiation of pluripotent stem cells into various progeny is perplexing. In vivo, nature imposes strict fate constraints. In vitro, PSCs differentiate into almost any phenotype. Might the concept of ‘cellular promiscuity’ explain these surprising behaviours?

John Gurdon’s [1] and Shinya Yamanaka’s [2] Nobel Prize involves discoveries that vex fundamental concepts about the stability of cellular identity [3,4], ageing as a rectified path and the differences between germ cells and somatic cells. The differentiation of pluripotent stem cells (PSCs) into progeny, including spermatids [5] and oocytes [6], is perplexing. In vivo, nature imposes strict fate constraints. Yet in vitro, reprogrammed PSCs, liberated from the body’s governance, freely differentiate into any phenotype—except placenta—violating even the segregation of somatic cells from germ cells. Albeit anthropomorphic, might the concept of ‘cellular promiscuity’ explain these surprising behaviours?

Fidelity to one’s differentiated state is nearly universal in vivo—even cancers retain some allegiance. Appreciating the mechanisms in vitro that liberate reprogrammed cells from the numerous constraints governing development in vivo might provide new insights. Similarly to highway guiderails, a range of constraints preclude progeny cells within embryos and organisms from travelling too far from the trajectory set by their ancestors. Restrictions are imposed externally—basement membranes and intercellular adhesions; internally—chromatin, cytoskeleton, endomembranes and mitochondria; and temporally, by ageing.

‘Cellular promiscuity’ was glimpsed previously during cloning; it was seen when somatic cells successfully ‘fertilized’ enucleated oocytes in amphibians [1] and later with ‘Dolly’ [7]. Embryonic stem cells (ESCs) corroborate this.
The inner cell mass of the blastocyst develops faithfully, but liberation from the trophectoderm generates pluripotent ESCs in vitro, which are freed from fate and polarity restrictions. These freedom-seeking ESCs still abide by three-dimensional rules, as they conform to chimaera body patterning when injected into blastocysts. Yet transplanted elsewhere, they produce chaotic teratomas or helter-skelter in vitro differentiation—that is, pluripotency.

August Weismann’s germ plasm theory, 130 years ago, recognized that gametes produce somatic cells, never the reverse. Primordial germ cell migrations into fetal gonads, and parent-of-origin imprints, explain how germ cells are sequestered, retaining genomic and epigenomic purity. Left uncontaminated, these future gametes are held in pristine form to parent the next generation. However, the cracks separating germ and somatic lineages in vitro are widening [5,6]. Perhaps they are restrained within gonads not for their purity but to prevent wild, uncontrolled misbehaviours resulting in germ cell tumours.

The ‘cellular promiscuity’ concept regarding PSCs in vitro might explain why cells of nearly any desired lineage can be detected using monospecific markers. Are assays so sensitive that rare cells can be detected in heterogeneous cultures? Certainly, population heterogeneity is a consideration for transplantable cells—dopaminergic neurons and islet cells—compared with applications needing few cells—sperm and oocytes. This dilemma of maintaining cellular identity in vitro after reprogramming is significant. If it is not addressed, the value of unrestrained induced PSCs (iPSCs) as reliable models for ‘diseases in a dish’, let alone for subsequent therapeutic transplantations, might be diminished. X-chromosome re-inactivation variants in differentiating human PSCs, epigenetic imprint errors and copy number variations are all indicators of in vitro infidelity.
PSCs, which are held to be undifferentiated cells, are artefacts after all, as in vivo they would undergo their programmed development.

If correct, the hypothesis accounts for concerns raised about the inherent genomic and epigenomic unreliability of iPSCs; they are likely to be unfaithful to their in vivo differentiation trajectories owing both to freedom from in vivo developmental programmes and to poorly characterized modifications in culture conditions. ‘Memory’ of the PSC’s in vivo identity might need to be reinforced by approaches that do not fully erase imprints. Regulatory authorities, including the Food & Drug Administration, require evidence that cultured PSCs retain their original cellular identity. Notwithstanding fidelity lapses at the organismal level, the recognition that our cells have intrinsic freedom-loving tendencies in vitro might generate better approaches for releasing somatic cells only partly, into probation rather than full emancipation.
10.
Rubinsztein DC. EMBO reports (2012) 13(3): 173–174
Two articles—one published online in January and one in the March issue of EMBO reports—implicate autophagy in the control of appetite through the regulation of neuropeptide production in hypothalamic neurons. A decline of autophagy with age in POMC neurons induces obesity and metabolic syndrome.

Kaushik et al. EMBO reports, this issue. doi:10.1038/embor.2011.260

Macroautophagy, which I will call autophagy, is a critical process that degrades bulk cytoplasm, including organelles, protein oligomers and a range of selective substrates. It has been linked with diverse physiological and disease-associated functions, including the removal of certain bacteria, of protein oligomers associated with neurodegenerative diseases and of dysfunctional mitochondria [1]. However, the primordial role of autophagy—conserved from yeast to mammals—appears to be its ability to provide nutrients to starving cells by releasing building blocks, such as amino acids and free fatty acids, obtained from macromolecular degradation. In yeast, autophagy deficiency enhances death under starvation conditions [2], and in mice it causes death from starvation in the early neonatal period [3,4]. Two recent articles from the Singh group—one of them in this issue of EMBO reports—also implicate autophagy in central appetite regulation [5,6].

Autophagy seems to decline with age in the liver [7], and it has thus been assumed that autophagy declines with age in all tissues, but this has not been tested rigorously in organs such as the brain. Conversely, specific autophagy upregulation in Caenorhabditis elegans and Drosophila extends lifespan, and drugs that induce autophagy—but also perturb unrelated processes, such as rapamycin—promote longevity in rodents [8].

Autophagy literally means self-eating, and it is therefore interesting to see that this cellular ‘self-eating’ has systemic roles in mammalian appetite control.
The control of appetite is influenced by central regulators, including various hormones and neurotransmitters, and by peripheral regulators, including hormones, glucose and free fatty acids [9]. Autophagy probably has peripheral roles in appetite and energy balance, as it regulates lipolysis and free fatty acid release [10]. Furthermore, Singh and colleagues have recently implicated autophagy in central appetite regulation [5,6].

The arcuate nucleus of the hypothalamus has received extensive attention as an integrator and regulator of energy homeostasis and appetite. Through its proximity to the median eminence, which is characterized by an incomplete blood–brain barrier, its neurons rapidly sense metabolic fluctuations in the blood. There are two different neuronal populations in the arcuate nucleus, which appear to have complementary effects on appetite (Fig 1). The proopiomelanocortin (POMC) neurons produce the neuropeptide precursor POMC, which is cleaved to form α-melanocyte stimulating hormone (α-MSH), among several other products. The α-MSH secreted from these neurons activates melanocortin 4 receptors on target neurons in the paraventricular nucleus of the hypothalamus, which ultimately reduce food intake. The second group of neurons contains neuropeptide Y (NPY) and Agouti-related peptide (AgRP). Secreted NPY binds to downstream neuronal receptors and stimulates appetite. AgRP blocks the ability of α-MSH to activate melanocortin 4 receptors [11]. Furthermore, AgRP neurons inhibit POMC neurons [9].

Figure 1. Schematic diagram illustrating the complementary roles of POMC and NPY/AgRP neurons in appetite control. AgRP, Agouti-related peptide; MC4R, melanocortin 4 receptor; α-MSH, α-melanocyte stimulating hormone; NPY, neuropeptide Y; POMC, proopiomelanocortin.

The first study from Singh’s group started by showing that starvation induces autophagy in the hypothalamus [5]. This finding alone merits some comment.
Autophagy is frequently assessed by using phosphatidylethanolamine-conjugated Atg8/LC3 (LC3-II), which is specifically associated with autophagosomes and autolysosomes. LC3-II levels on western blots and the number of LC3-positive vesicles correlate strongly with the number of autophagosomes [1]. To assess whether LC3-II formation is altered by a perturbation, its level can be measured in the presence of lysosomal inhibitors, which block LC3-II degradation by preventing autophagosome–lysosome fusion [12]. Differences in LC3-II levels in response to a particular perturbation in the presence of lysosomal inhibitors therefore reflect changes in autophagosome synthesis. An earlier study using GFP-LC3 suggested that autophagy was not upregulated in the brains of starved mice, in contrast to other tissues where it was [13]. However, that study only measured steady-state levels of autophagosomes and was performed before the need for lysosomal inhibitors was appreciated. Subsequent work has shown rapid flux of autophagosomes to lysosomes in primary neurons, which might confound analyses without lysosomal inhibitors [14]. Thus, the data of the Singh group—showing by a range of methods, including lysosomal inhibitors, that autophagy is upregulated in the brain [5]—address an important issue in the field and corroborate another recent study that examined this question by using sophisticated imaging methods [15].

Singh and colleagues then analysed mice with a specific knockout of the autophagy gene Atg7 in AgRP neurons [5]. Although fasting increases AgRP mRNA and protein levels in normal mice, these changes were not seen in the knockout mice. AgRP neurons provide inhibitory signals to POMC neurons, and Kaushik and colleagues found that the AgRP-specific Atg7 knockout mice had higher levels of POMC and α-MSH than normal mice.
This indicated that starvation regulates appetite in a manner that is partly dependent on autophagy. The authors suggested that the peripheral free fatty acids released during starvation induce autophagy by activating AMP-activated protein kinase (AMPK), a known positive regulator of autophagy. This, in turn, enhances degradation of hypothalamic lipids and increases endogenous intracellular free fatty acid concentrations. The increased intracellular free fatty acids upregulate AgRP mRNA and protein expression. As AgRP normally inhibits POMC/α-MSH production in target neurons, a defect in AgRP responses in the autophagy-null AgRP neurons results in higher α-MSH levels, which could account for the decreased mouse bodyweight.

In follow-up work, Singh’s group has now studied the effects of inhibiting autophagy in POMC neurons, again using Atg7 deletion [6]. These mice, in contrast to the AgRP autophagy knockouts, are obese. This might be accounted for, in part, by an increase in POMC preprotein levels and in its cleavage product adrenocorticotropic hormone (ACTH) in the knockout POMC neurons, which is associated with a failure to generate α-MSH. Interestingly, these POMC autophagy knockout mice show impaired peripheral lipolysis in response to starvation, which the authors suggest might be due to reduced central sympathetic tone to the periphery from the POMC neurons. In addition, POMC-neuron-specific Atg7 knockout mice have impaired glucose tolerance.

This new study raises several interesting issues. How does the autophagy defect in POMC neurons alter the cleavage pattern of POMC? Is this modulated within the physiological range of autophagy activity fluctuations in response to diet and starvation? Importantly, in vivo, autophagy might fluctuate similarly (or possibly differently) in POMC and AgRP neurons in response to diet and/or starvation.
Given the tight interrelation of these neurons, how does this affect their overall response in appetite regulation in wild-type animals?

Finally, the study also shows that hypothalamic autophagosome formation is decreased in older mice. To my knowledge, this is the first demonstration of this phenomenon in the brain. The older mice phenocopied aspects of the POMC-neuron autophagy-null mice—increased hypothalamic POMC preprotein and ACTH and decreased α-MSH, along with similar adiposity and lipolytic defects, compared with young mice. These data are provocative from several perspectives. In the context of metabolism, it is tantalizing to consider that decreasing autophagy with ageing in POMC neurons could contribute to the metabolic problems associated with ageing. Again, this model considers the POMC neurons in isolation, and it would be important to understand how reduced autophagy in aged AgRP neurons counterbalances this situation. In a more general sense, the data strongly support the concept that neuronal autophagy might decline with age.

Autophagy is a major clearance route for many mutant, aggregate-prone intracytoplasmic proteins that cause neurodegenerative disease, such as tau (Alzheimer disease), α-synuclein (Parkinson disease) and huntingtin (Huntington disease), and the risk of these diseases is age-dependent [1]. Thus, it is tempting to suggest that the dramatic age-related risks for these diseases could be largely due to a decreased neuronal capacity to degrade these toxic proteins. Neurodegenerative pathology and age-related metabolic abnormalities might be related—some of the metabolic disturbances that occur in humans with age could be due to the accumulation of such toxic proteins. High levels of these proteins are seen in many people who do not have, or have not yet developed, neurodegenerative diseases, as many of the proteins start to accumulate decades before any sign of disease. These proteins might alter metabolism and appetite either directly, by affecting target neurons, or by influencing hormonal and neurotransmitter inputs into such neurons.
11.
The smooth extrapolation of results from mouse to man faces many significant obstacles, not least that humans are diurnally active primates whereas mice are nocturnally active rodents.

All animals display profound variations in their physiology over a 24-h period, including changes in locomotor activity, hormone production, metabolism and neural activity. These rhythms are endogenously generated by the circadian system and provide a selective advantage by enabling organisms to anticipate both daily and seasonal changes in the environment. As a result, normal physiology is dynamic, showing constant circadian modulation of homeostatic set-points (Mrosovsky, 1990). Although this is adaptive for the organism, it poses a problem for biological measurement, whether physiological, behavioural or biochemical. For example, in mammals, approximately 10% of the genes expressed in any given tissue show significant circadian variation (Storch et al, 2002). Toxicology and pharmacology studies have also demonstrated dramatically different effects at different times of the day (Burns, 2000). As a result, the time of day and the temporal niche of the animal model need to be taken into consideration in the design of any experiment.

Mice have become the organism of choice in biomedical research due to the availability of extensive genomic information and well-established methods of genetic modification. This has resulted in the production of an enormous range of transgenic and knockout models, which are used widely in attempts to demonstrate genotype–phenotype associations (Crawley, 2008). Much of this research is undertaken in an attempt to understand human physiology and disease. However, the smooth extrapolation of results from mouse to man faces many obstacles, not least that humans are diurnally active primates, whereas mice are nocturnally active rodents.
During the daytime a mouse is normally inactive or asleep, and as animal facilities are generally operational between 07:00 and 17:00, most of the data collected from mice come from a mammalian model in the resting state. In the drug development process, many compounds are excluded on the basis of efficacy or adverse effects. One wonders how many opportunities have been lost because differences in temporal biology were not taken into account.

Although there is a growing awareness of the importance of circadian rhythms in experimental design, time-of-day effects are not the only potential problem. Most behavioural phenotyping is undertaken in the light, when mice are normally inactive or asleep, and when in the wild they would be concealed from light. Several studies have assessed the impact of light and dark on behavioural testing. Mice are photophobic and normally avoid bright light, a phenomenon that is exploited in many tests, such as the open-field and light/dark-box paradigms (Crawley, 2008). Open-field testing has demonstrated dramatic differences in the exploratory activity of mice under different levels of light (Valentinuzzi et al, 2000). In DBA mice, testing in the light has been shown to result in behavioural inhibition and cognitive disruption (Roedel et al, 2006). Conversely, testing in the dark results in improved discrimination in a range of behavioural tests, including the widely used SHIRPA test battery (Hossain et al, 2004). Collectively, these findings suggest that testing under different light conditions produces differences in behaviour, and that testing in the dark provides superior outcomes. By contrast, relatively few studies have assessed the effects of circadian phase on performance in behavioural tests (that is, under constant conditions). Beeler et al (2006) found no effect of circadian phase on a range of behavioural tests.
However, other studies have demonstrated a notable impact of circadian phase on learning and memory, which would be expected to translate into performance (Chaudhury & Colwell, 2002).

To a circadian biologist, it is surprising that testing at different circadian phases does not result in more profound differences in behavioural performance. After all, toxic effects can vary from 20% to 80% over one day, and gene expression can vary by more than 100-fold. One explanation might be that the stimuli involved in many test protocols, including handling, override the normal circadian gating of arousal. After all, we are not slaves to our internal clocks, and indeed it would be maladaptive if we were. Environmental factors such as light exert acute effects on arousal. In mice, light exposure during the active phase produces an acute suppression of locomotor activity and induction of sleep (Lupi et al, 2008). Conversely, light exposure during the inactive period gives rise to an increase in activity and heart rate (Thompson et al, 2008). As levels of arousal are closely linked to performance, a challenge for the future is to determine how time of day and responsiveness to environmental stimuli interact to regulate behaviour.
13.
Geoffrey Miller. EMBO reports (2012) 13(10): 880–884
Runaway consumerism imposes social and ecological costs on humans in much the same way that runaway sexual ornamentation imposes survival costs and extinction risks on other animals.

Sex and marketing have been coupled for a very long time. At the cultural level, their relationship has been appreciated since the 1960s ‘Mad Men’ era, when the sexual revolution coincided with the golden age of advertising, and marketers realized that ‘sex sells’. At the biological level, their interplay goes much further back, to the Cambrian explosion around 530 million years ago. During this period of rapid evolutionary expansion, multicellular organisms began to evolve elaborate sexual ornaments to advertise their genetic quality to the most important consumers of all in the great mating market of life: the opposite sex.

Maintaining the genetic quality of one’s offspring had already been a problem for billions of years. Ever since life originated, around 3.7 billion years ago, RNA and DNA have been under selection to copy themselves as accurately as possible [1]. Yet perfect self-replication is biochemically impossible, and almost all replication errors are harmful rather than helpful [2]. Thus, mutations have been eroding the genomic stability of single-celled organisms for trillions of generations, and countless lineages of asexual organisms have suffered extinction through mutational meltdown—the runaway accumulation of copying errors [3]. Only through wildly profligate self-cloning could such organisms have any hope of leaving at least a few offspring with no new harmful mutations, which could then best survive and reproduce.

Around 1.5 billion years ago, bacteria evolved the most basic form of sex to minimize mutation load: bacterial conjugation [4]. By swapping bits of DNA across the pilus (a tiny intercellular bridge), a bacterium can replace DNA sequences compromised by copying errors with intact sequences from its peers.
Bacteria thus finally had some defence against mutational meltdown, and they thrived and diversified. Then, with the evolution of genuine sexual reproduction through meiosis, perhaps around 1.2 billion years ago, eukaryotes made a great advance in their ability to purge mutations. By combining their genes with a mate’s genes, they could produce progeny with huge genetic variety—and, crucially, with a wider range of mutation loads [5]. The unlucky offspring who happened to inherit an above-average number of harmful mutations from both parents would die young without reproducing, taking many mutations into oblivion with them. The lucky offspring who happened to inherit a below-average number of mutations from both parents would live long, prosper and produce offspring of higher genetic quality. Sexual recombination also made it easier to spread and combine the rare mutations that happened to be useful, opening the way for much faster evolutionary advances [6]. Sex became the foundation of almost all complex life because it was so good at both short-term damage limitation (purging bad mutations) and long-term innovation (spreading good mutations).

Yet single-celled organisms always had a problem with sex: they were not very good at choosing sexual partners with the best genes, that is, the lowest mutation loads. Given bacterial capabilities for chemical communication such as quorum sensing [7], perhaps some prokaryotes and eukaryotes paid attention to short-range chemical cues of genetic quality before swapping genes.
However, mating was mainly random before the evolution of longer-range senses and nervous systems.

All of this changed profoundly with the Cambrian explosion, which saw organisms undergoing a genetic revolution that increased the complexity of gene regulatory networks, and a morphological revolution that increased the diversity of multicellular body plans. It was also a neurological and psychological revolution. As organisms became increasingly mobile, they evolved senses such as vision [8] and more complex nervous systems [9] to find food and evade predators. However, these new senses also empowered a sexual revolution, as they gave animals new tools for choosing sexual partners. Rather than hooking up randomly with the nearest mate, animals could now select mates on the basis of visible cues of genetic quality such as body size, energy level, bright coloration and behavioural competence. By choosing the highest-quality mates, they could produce higher-quality offspring with lower mutation loads [10]. Such mate choice imposed selection on all of those quality cues to become larger, brighter and more conspicuous, amplifying them into true sexual ornaments: biological luxury goods, such as the guppy’s tail and the peacock’s train, that function mainly to impress and attract females [11]. These sexual ornaments evolved a complex genetic architecture, to capture a larger share of the genetic variation across individuals and to reveal mutation load more accurately [12].

Ever since the Cambrian, the mating market of sexually reproducing animal species has been transformed, to some degree, into a consumerist fantasy world of conspicuous quality, status, fashion, beauty and romance. Individuals advertise their genetic quality and phenotypic condition through reliable, hard-to-fake signals or ‘fitness indicators’, such as pheromones, songs, ornaments and foreplay.
Mates are chosen on the basis of who displays the largest, costliest, most precise, most popular and most salient fitness indicators. Mate choice for fitness indicators is not restricted to females choosing males; it often occurs in both sexes [13], especially in socially monogamous species with mutual mate choice, such as humans [14].

Thus, for 500 million years, animals have had to straddle two worlds in perpetual tension: natural selection and sexual selection. Each type of selection works through different evolutionary principles and dynamics, and each yields different types of adaptation and biodiversity. Neither fully dominates the other, because sexual attractiveness without survival is a short-lived vanity, whereas ecological competence without reproduction is a long-lived sterility. Natural selection shapes species to fit their geographical habitats and ecological niches, and favours efficiency in growth, foraging, parasite resistance, predator evasion and social competition. Sexual selection shapes each sex to fit the needs, desires and whims of the other sex, and favours conspicuous extravagance in all sorts of fitness indicators. Animal life walks a fine line between efficiency and opulence. More than 130,000 plant species also play the sexual ornamentation game, having evolved flowers to attract pollinators [15].

The sexual-selection world challenges the popular misconception that evolution is blind and dumb. In fact, as Darwin emphasized, sexual selection is often perceptive and clever, because animal senses and brains mediate mate choice. This makes sexual selection closer in spirit to artificial selection, which is governed by the senses and brains of human breeders.
In so far as sexual selection shaped human bodies, minds and morals, we were also shaped by intelligent designers—who just happened to be romantic hominids rather than fictional gods [16].

Thus, mate choice for genetic quality is analogous in many ways to consumer choice for brand quality [17]. Mate choice and consumer choice are both semi-conscious—partly instinctive, partly learned through trial and error, and partly influenced by observing the choices made by others. Both are partly focused on the objective qualities and useful features of the available options, and partly on their arbitrary, aesthetic and fashionable aspects. Both create the demand that suppliers try to understand and fulfil, with each sex striving to learn the mating preferences of the other, and marketers striving to understand consumer preferences through surveys, focus groups and social-media data mining.

Mate choice and consumer choice can both yield absurdly wasteful outcomes: a huge diversity of useless, superficial variations in the biodiversity of species and in the economic diversity of brands, products and packaging. Most biodiversity seems to be driven by sexual selection favouring whimsical differences across populations in the arbitrary details of fitness indicators, not just by naturally selected adaptation to different ecological niches [18]. The result is that, within each genus, a species can be identified most easily by its distinct mating calls, sexual ornaments, courtship behaviours and genital morphologies [19], not by different foraging tactics or anti-predator defences.
Similarly, much of the diversity in consumer products—such as shirts, cars, colleges or mutual funds—is at the level of arbitrary design details, branding, packaging and advertising, not at the level of objective product features and functionality.

These analogies between sex and marketing run deep, because both depend on reliable signals of quality. Until recently, two traditions of signalling theory developed independently in the biological and social sciences. The first landmark in biological signalling theory was Charles Darwin's analysis of mate choice for sexual ornaments as cues of good fitness and fertility in his book, The Descent of Man, and Selection in Relation to Sex (1871). Ronald Fisher analysed the evolution of mate preferences for fitness indicators in 1915 [20]. Amotz Zahavi proposed the 'handicap principle' in 1975, arguing that only costly signals could be reliable, hard-to-fake indicators of genetic quality or phenotypic condition [21]. Richard Dawkins and John Krebs applied game theory in 1978 to analyse the reliability of animal signals and the co-evolution of signallers and receivers [22]. In 1990, Alan Grafen eventually proposed a formal model of the 'handicap principle' [23], and Richard Michod and Oren Hasson analysed 'reliable indicators of fitness' [24]. Since then, biological signalling theory has flourished and has informed research on sexual selection, animal communication and social behaviour.

The parallel tradition of signalling theory in the social sciences and philosophy goes back to Aristotle, who argued that ethical and rational acts are reliable signals of underlying moral and cognitive virtues (ca 350–322 BC).
Friedrich Nietzsche analysed beauty, creativity, morality and even cognition as expressions of biological vigour by using signalling logic (1872–1888). Thorstein Veblen proposed that conspicuous luxuries, quality workmanship and educational credentials act as reliable signals of wealth, effort and taste in The Theory of the Leisure Class (1899), The Instinct of Workmanship (1914) and The Higher Learning in America (1922). Vance Packard used signalling logic to analyse social class, runaway consumerism and corporate careerism in The Status Seekers (1959), The Waste Makers (1960) and The Pyramid Climbers (1962), and Ernst Gombrich analysed beauty in art as a reliable signal of the artist's skill and effort in Art and Illusion (1977) and A Sense of Order (1979). Michael Spence developed formal models of educational credentials as reliable signals of capability and conscientiousness in Market Signalling (1974). Robert Frank used signalling logic to analyse job titles, emotions, career ambitions and consumer luxuries in Choosing the Right Pond (1985), Passions within Reason (1988), The Winner-Take-All Society (1995) and Luxury Fever (2000).

Evolutionary psychology and evolutionary anthropology have been integrating these two traditions to better understand many puzzles in human evolution that defy explanation in terms of natural selection for survival.
For example, signalling theory has illuminated the origins and functions of facial beauty, female breasts and buttocks, body ornamentation, clothing, big game hunting, hand-axes, art, music, humour, poetry, story-telling, courtship gifts, charity, moral virtues, leadership, status-seeking, risk-taking, sports, religion, political ideologies, personality traits, adaptive self-deception and consumer behaviour [16,17,25,26,27,28,29].

Building on signalling theory and sexual selection theory, the new science of evolutionary consumer psychology [30] has been making big advances in understanding consumer goods as reliable signals—not just signals of monetary wealth and elite taste, but signals of deeper traits such as intelligence, moral virtues, mating strategies and the 'Big Five' personality traits: openness, conscientiousness, agreeableness, extraversion and emotional stability [17]. These individual traits are deeper than wealth and taste in several ways: they are found in the other great apes, are heritable across generations, are stable across life, are important in all cultures and are naturally salient when interacting with mates, friends and kin [17,27,31]. For example, consumers seek elite university degrees as signals of intelligence; they buy organic fair-trade foods as signals of agreeableness; and they value foreign travel and avant-garde culture as signals of openness [17]. New molecular genetics research suggests that mutation load accounts for much of the heritable variation in human intelligence [32] and personality [33], so consumerist signals of these traits might be revealing genetic quality indirectly.
If so, conspicuous consumption can be seen as just another 'good-genes indicator' favoured by mate choice.

Indeed, studies suggest that much conspicuous consumption, especially by young single people, functions as some form of mating effort. After men and women think about potential dates with attractive mates, men say they would spend more money on conspicuous luxury goods such as prestige watches, whereas women say they would spend more time doing conspicuous charity activities such as volunteering at a children's hospital [34]. Conspicuous consumption by males reveals that they are pursuing a short-term mating strategy [35], and this activity is most attractive to women at peak fertility near ovulation [36]. Men give much higher tips to lap dancers who are ovulating [37]. Ovulating women choose sexier and more revealing clothes, shoes and fashion accessories [38]. Men living in towns with a scarcity of women compete harder to acquire luxuries and accumulate more consumer debt [39]. Romantic gift-giving is an important tactic in human courtship and mate retention, especially for men who might be signalling commitment [40]. Green consumerism—preferring eco-friendly products—is an effective form of conspicuous conservation, signalling both status and altruism [41].

Findings such as these challenge traditional assumptions in economics. For example, ever since the Marginal Revolution—the development of economic theory during the 1870s—mainstream economics has made the 'Rational Man' assumption that consumers maximize their expected utility from their product choices, without reference to what other consumers are doing or desiring. This assumption was convenient both analytically—as it allowed easier mathematical modelling of markets and price equilibria—and ideologically, in legitimizing free markets and luxury goods.
However, new research from evolutionary consumer psychology and behavioural economics shows that consumers often desire 'positional goods' such as prestige-branded luxuries that signal social position and status through their relative cost, exclusivity and rarity. Positional goods create 'positional externalities'—the harmful social side-effects of runaway status-seeking and consumption arms races [42].

These positional externalities are important because they undermine the most important theoretical justification for free markets—the first fundamental theorem of welfare economics, a formalization of Adam Smith's 'invisible hand' argument, which says that competitive markets always lead to efficient distributions of resources. In the 1930s, the British Marxist biologists Julian Huxley and J.B.S. Haldane were already wary of such rationales for capitalism, and understood that runaway consumerism imposes social and ecological costs on humans in much the same way that runaway sexual ornamentation imposes survival costs and extinction risks on other animals [16]. Evidence shows that consumerist status-seeking leads to economic inefficiencies and costs to human welfare [42]. Runaway consumerism might be one predictable result of a human nature shaped by sexual selection, but we can display desirable traits in many other ways, such as green consumerism, conspicuous charity, ethical investment and through social media such as Facebook [17,43].

Future work in evolutionary consumer psychology should give further insights into the links between sex, mutations, evolution and marketing. These links have been important for at least 500 million years and probably sparked the evolution of human intelligence, language, creativity, beauty, morality and ideology.
A better understanding of these links could help us nudge global consumerist capitalism into a more sustainable form that imposes lower costs on the biosphere and yields higher benefits for future generations.
Geoffrey Miller
14.
15.
EMBO J 31(5), 1062–1079 (2012); published online 17 January 2012

In this issue of The EMBO Journal, Garg et al (2012) delineate a signalling pathway that leads to calreticulin (CRT) exposure and ATP release by cancer cells that succumb to photodynamic therapy (PDT), thereby providing fresh insights into the molecular regulation of immunogenic cell death (ICD).

The textbook notion that apoptosis would always take place unrecognized by the immune system has recently been invalidated (Zitvogel et al, 2010; Galluzzi et al, 2012). Thus, in specific circumstances (in particular in response to anthracyclines, oxaliplatin and γ-irradiation), cancer cells can enter a lethal stress pathway linked to the emission of a spatiotemporally defined combination of signals that is decoded by the immune system to activate tumour-specific immune responses (Zitvogel et al, 2010). These signals include the pre-apoptotic exposure of intracellular proteins such as the endoplasmic reticulum (ER) chaperone CRT and the heat-shock protein HSP90 at the cell surface, the pre-apoptotic secretion of ATP, and the post-apoptotic release of the nuclear protein HMGB1 (Zitvogel et al, 2010). Together, these processes (and perhaps others) constitute the molecular determinants of ICD.

In this issue of The EMBO Journal, Garg et al (2012) add hypericin-based PDT (Hyp-PDT) to the list of bona fide ICD inducers and convincingly link Hyp-PDT-elicited ICD to the functional activation of the immune system. Moreover, Garg et al (2012) demonstrate that Hyp-PDT stimulates ICD via signalling pathways that overlap with—but are not identical to—those elicited by anthracyclines, which constitute the first ICD inducers to be characterized (Casares et al, 2005; Zappasodi et al, 2010; Fucikova et al, 2011).

Intrigued by the fact that the ER stress response is required for anthracycline-induced ICD (Panaretakis et al, 2009), Garg et al (2012) decided to investigate the immunogenicity of Hyp-PDT (which selectively targets the ER).
Hyp-PDT potently stimulated CRT exposure and ATP release in human bladder carcinoma T24 cells. As a result, T24 cells exposed to Hyp-PDT (but not untreated cells) were engulfed by Mf4/4 macrophages and human dendritic cells (DCs), the most important antigen-presenting cells in antitumour immunity. Similarly, murine colon carcinoma CT26 cells succumbing to Hyp-PDT (but not cells dying in response to the unspecific ER stressor tunicamycin) were preferentially phagocytosed by murine JAWSII DCs, and efficiently immunized syngeneic BALB/c mice against a subsequent challenge with living cells of the same type. Of note, in contrast to T24 cells treated with lipopolysaccharide (LPS) or dying from accidental necrosis, T24 cells exposed to Hyp-PDT activated DCs while eliciting a peculiar functional profile, featuring high levels of NO production and no secretion of immunosuppressive interleukin-10 (IL-10) (Garg et al, 2012). Moreover, upon co-culture with Hyp-PDT-treated T24 cells, human DCs were found to secrete high levels of IL-1β, a cytokine that is required for the adequate polarization of interferon-γ (IFNγ)-producing antineoplastic CD8+ T cells (Aymeric et al, 2010). Taken together, these data demonstrate that Hyp-PDT induces bona fide ICD, eliciting an antitumour immune response.

By combining pharmacological and genetic approaches, Garg et al (2012) then investigated the molecular cascades that are required for Hyp-PDT-induced CRT exposure and ATP release.
They found that CRT exposure triggered by Hyp-PDT requires reactive oxygen species (as demonstrated with the 1O2 quencher L-histidine), class I phosphoinositide 3-kinase (PI3K) activity (as shown with the chemical inhibitor wortmannin and the RNAi-mediated depletion of the catalytic PI3K subunit p110), the actin cytoskeleton (as proven with the actin inhibitor latrunculin B), ER-to-Golgi anterograde transport (as shown using brefeldin A), the ER stress-associated kinase PERK, the pro-apoptotic molecules BAX and BAK, as well as the CRT cell-surface receptor CD91 (as demonstrated by their knockout or RNAi-mediated depletion). However, there were differences in the signalling pathways leading to CRT exposure in response to anthracyclines (Panaretakis et al, 2009) and Hyp-PDT (Garg et al, 2012). In contrast to the former, the latter was not accompanied by exposure of the ER chaperone ERp57, and did not require eIF2α phosphorylation (as shown with non-phosphorylatable eIF2α mutants), caspase-8 activity (as shown with the pan-caspase blocker Z-VAD.fmk, upon overexpression of the viral caspase inhibitor CrmA, and following the RNAi-mediated depletion of caspase-8), or increased cytosolic Ca2+ concentrations (as proven with cytosolic Ca2+ chelators and overexpression of the ER Ca2+ pump SERCA). Moreover, Hyp-PDT induced the translocation of CRT to the cell surface irrespective of retrograde transport (as demonstrated with the microtubular poison nocodazole) and lipid rafts (as demonstrated with the cholesterol-depleting agent methyl-β-cyclodextrin). Of note, ATP secretion in response to Hyp-PDT depended on ER-to-Golgi anterograde transport and on PI3K and PERK activity (presumably owing to their role in the regulation of secretory pathways), but did not require BAX and BAK (Garg et al, 2012).
Since PERK can stimulate autophagy in the context of ER stress (Kroemer et al, 2010), it is tempting to speculate that autophagy is involved in Hyp-PDT-elicited ATP secretion, as appears to be the case during anthracycline-induced ICD (Michaud et al, 2011).

Altogether, the intriguing report by Garg et al (2012) demonstrates that the stress signalling pathways leading to ICD depend—at least in part—on the initiating stimulus (Figure 1). Speculatively, this points to the coexistence of a 'core' ICD signalling pathway (which would be common to several, if not all, ICD inducers) with 'private' molecular cascades (which would be activated in a stimulus-dependent fashion). Irrespective of these details, the work by Garg et al (2012) further underscores the importance of anticancer immune responses elicited by established and experimental therapies.

Figure 1. Molecular mechanisms of immunogenic cell death (ICD). At least three processes underlie the immunogenicity of cell death: the pre-apoptotic exposure of calreticulin (CRT) at the cell surface, the secretion of ATP, and the post-apoptotic release of HMGB1. ICD can be triggered by multiple stimuli, including photodynamic therapy, anthracycline-based chemotherapy and some types of radiotherapy. The signalling pathways elicited by distinct ICD inducers overlap, but are not identical. Shown in red are molecules and processes that—according to current knowledge—may be required for CRT exposure and ATP secretion in response to most, if not all, ICD inducers. The molecular determinants of the immunogenic release of HMGB1 remain poorly understood. ER, endoplasmic reticulum; P-eIF2α, phosphorylated eIF2α; PI3K, class I phosphoinositide 3-kinase; ROS, reactive oxygen species.
16.
The public view of life-extension technologies is more nuanced than expected, and researchers must engage in discussions if they hope to promote awareness and acceptance.

There is increasing research and commercial interest in the development of novel interventions that might be able to extend human life expectancy by decelerating the ageing process. In this context, there is unabated interest in the life-extending effects of caloric restriction in mammals, and there are great hopes for drugs that could slow human ageing by mimicking its effects (Fontana et al, 2010). The multinational pharmaceutical company GlaxoSmithKline, for example, acquired Sirtris Pharmaceuticals in 2008, ostensibly for their portfolio of drugs targeting 'diseases of ageing'. More recently, the immunosuppressant drug rapamycin has been shown to extend maximum lifespan in mice (Harrison et al, 2009). Such findings have stoked the kind of enthusiasm that has become common in media reports of life-extension and anti-ageing research, with claims that rapamycin might be "the cure for all that ails" (Hasty, 2009), or that it is an "anti-aging drug [that] could be used today" (Blagosklonny, 2007).

Given the academic, commercial and media interest in prolonging human lifespan—a centuries-old dream of humanity—it is interesting to gauge what the public thinks about the possibility of living longer, healthier lives, and to ask whether they would be willing to buy and use drugs that slow the ageing process. Surveys that have addressed these questions have given some rather surprising results, contrary to the expectations of many researchers in the field.
They have also highlighted that although human life extension (HLE) and ageing are topics with enormous implications for society and individuals, scientists have not communicated efficiently with the public about their research and its possible applications.

Proponents and opponents of HLE often assume that public attitudes towards ageing interventions will be strongly for or against, but until now, there has been little empirical evidence with which to test these assumptions (Lucke & Hall, 2005). We recently surveyed members of the public in Australia and found a variety of opinions, including some ambivalence towards the development and use of drugs that could slow ageing and increase lifespan. Our findings suggest that many members of the public anticipate both positive and negative outcomes from this work (Partridge, 2009a, b, 2010; Underwood et al, 2009).

In a community survey of public attitudes towards HLE, we found that around two-thirds of a sample of 605 Australian adults supported research with the potential to increase the maximum human lifespan by slowing ageing (Partridge et al, 2010). However, only one-third expressed an interest in using an anti-ageing pill if it were developed. Half of the respondents were not interested in personally using such a pill, and around one in ten were undecided.

Some proponents of HLE anticipate their research being impeded by strong public antipathy (Miller, 2002, 2009). Richard Miller has claimed that opposition to the development of anti-ageing interventions often exists because of an "irrational public predisposition" to think that increased lifespans will only lead to elongation of infirmity.
He has called this "gerontologiphobia"—a shared feeling among laypeople that while research to cure age-related diseases such as dementia is laudable, research that aims to intervene in ageing is a "public menace" (Miller, 2002).

We found broad support for the amelioration of age-related diseases and for technologies that might preserve quality of life, but scepticism about a major promise of HLE—that it will delay the onset of age-related diseases and extend an individual's healthy lifespan. Among the people we interviewed, the most commonly cited potential negative personal outcome of HLE was that it would extend the number of years a person spent with chronic illness and poor quality of life (Partridge et al, 2009a). Although some members of the public envisioned more years spent in good health, almost 40% of participants were concerned that a drug to slow ageing would do more harm than good to them personally; another 13% were unsure about the benefits and costs (Partridge et al, 2010).

It would be unwise to label such concerns as irrational, when it might be that advocates of HLE have failed to persuade the public on this issue. Have HLE researchers explained what they have discovered about ageing and what it means? Perhaps the public sees the claims that have been made about HLE as 'too good to be true'.

Results of surveys of biogerontologists suggest that they are either unaware or dismissive of public concerns about HLE. They often ignore them, dismiss them as "far-fetched", or feel no responsibility "to respond" (Settersten Jr et al, 2008). Given this attitude, it is perhaps not surprising that the public are sceptical of their claims.

Scientists are not always clear about the outcomes of their work, biogerontologists included.
Although the life-extending effects of interventions in animal models are invoked as arguments for supporting anti-ageing research, it is not certain that these interventions will also extend healthy lifespans in humans. Miller (2009) reassuringly claims that the available evidence consistently suggests that quality of life is maintained in laboratory animals with extended lifespans, but he acknowledges that the evidence is "sparse" and urges more research on the topic (Miller, 2009). In the light of such ambiguity, researchers need to respond to public concerns in ways that reflect the available evidence and the potential of their work, without becoming apostles for technologies that have not yet been developed. An anti-ageing drug that extends lifespan without maintaining quality of life is clearly undesirable, but the public needs to be persuaded that such an outcome can be avoided.

The public is also concerned about the possible adverse side effects of anti-ageing drugs. Many people were bemused when they discovered that members of the Caloric Restriction Society experienced a loss of libido and loss of muscle mass as a result of adhering to a low-calorie diet to extend their longevity—for many people, such side effects would not be worth the promise of some extra years of life. Adverse side effects are acknowledged as a considerable potential challenge to the development of an effective life-extending drug in humans (Fontana et al, 2010). If researchers do not discuss these possible effects, then a curious public might draw its own conclusions.

Some HLE advocates seem eager to tout potential anti-ageing drugs as being free from adverse side effects.
For example, Blagosklonny (2007) has argued that rapamycin could be used to prevent age-related diseases in humans because it is "a non-toxic, well tolerated drug that is suitable for everyday oral administration", with its major "side effects" being anti-tumour, bone-protecting and caloric-restriction-mimicking effects. By contrast, Kaeberlein & Kennedy (2009) have advised the public against using the drug because of its immunosuppressive effects.

Aubrey de Grey has called on several occasions for scientists to provide more optimistic timescales for HLE. He claims that public opposition to interventions in ageing is based on "extraordinarily transparently flawed opinions" that HLE would be unethical and unsustainable (de Grey, 2004). In his view, public opposition is driven by scepticism about whether HLE will be possible, and concerns about extending infirmity, injustice or social harms are simply excuses to justify people's belief that ageing is 'not so bad' (de Grey, 2007). He argues that this "pro-ageing trance" can only be broken by persuading the public that HLE technologies are just around the corner.

Contrary to de Grey's expectations of public pessimism, 75% of our survey participants thought that HLE technologies were likely to be developed in the near future. Furthermore, concerns about the personal, social and ethical implications of ageing interventions and HLE were not confined to those who believed that HLE is not feasible (Partridge et al, 2010).

Juengst et al (2003) have rightly pointed out that any interventions that slow ageing and substantially increase human longevity might generate more social, economic, political, legal, ethical and public health issues than any other technological advance in biomedicine.
Our survey supports this idea; the major ethical concerns raised by members of the public reflect the many and diverse issues that are discussed in the bioethics literature (Partridge et al, 2009b; Partridge & Hall, 2007).

When pressed, even enthusiasts admit that a drastic extension of human life might be a mixed blessing. A recent review by researchers at the US National Institute on Aging pointed to several economic and social challenges that arise from longevity extension (Sierra et al, 2009). Perry (2004) suggests that the ability to slow ageing will cause "profound changes" and a "firestorm of controversy". Even de Grey (2005) concedes that the development of an effective way to slow ageing will cause "mayhem" and "absolute pandemonium". If even the advocates of anti-ageing and HLE anticipate widespread societal disruption, the public is right to express concerns about the prospect of these things becoming reality. It is accordingly unfair to dismiss public concerns about the social and ethical implications as "irrational", "inane" or "breathtakingly stupid" (de Grey, 2004).

The breadth of the possible implications of HLE reinforces the need for more discussion about the funding of such research and the management of its outcomes (Juengst et al, 2003). Biogerontologists need to take public concerns more seriously if they hope to foster support for their work. If there are misperceptions about the likely outcomes of intervention in ageing, then biogerontologists need to better explain their research to the public and discuss how their concerns will be addressed. It is not enough to hope that a breakthrough in human ageing research will automatically assuage public concerns about the effects of HLE on quality of life, overpopulation, economic sustainability, the environment and inequities in access to such technologies.
The trajectories of other controversial research areas—such as human embryonic stem cell research and assisted reproductive technologies (Deech & Smajdor, 2007)—have shown that "listening to public concerns on research and responding appropriately" is a more effective way of fostering support than arrogant dismissal of those concerns (Anon, 2009).
Brad Partridge, Jayne Lucke and Wayne Hall
17.
18.
Martinson BC, EMBO reports (2011) 12(8): 758–762
Universities have been churning out PhD students to reap financial and other rewards for training biomedical scientists. This deluge of cheap labour has created unhealthy competition, which encourages scientific misconduct.

Most developed nations invest a considerable amount of public money in scientific research for a variety of reasons: most importantly because research is regarded as a motor for economic progress and development, and to train a research workforce for both academia and industry. Not surprisingly, governments are occasionally confronted with questions about whether the money invested in research is appropriate and whether taxpayers are getting the maximum value for their investments.

The training and maintenance of the research workforce is a large component of these investments. Yet discussions in the USA about the appropriate size of this workforce have typically been contentious, owing to an apparent lack of reliable data to tell us whether the system yields academic 'reproduction rates' that are above, below or at replacement levels. In the USA, questions about the size and composition of the research workforce have historically been driven by concerns that the system produces an insufficient number of scientists. As Donald Kennedy, then Editor-in-Chief of Science, noted several years ago, leaders in prestigious academic institutions have repeatedly rung alarm bells about shortages in the science workforce. Less often does one see questions raised about whether too many scientists are being produced, or concerns about unintended consequences that may result from such overproduction.
Yet, recognizing that resources are finite, it seems reasonable to ask what level of competition for resources is productive, and at what level competition becomes counter-productive.

Finding a proper balance between the size of the research workforce and the resources available to sustain it has other important implications. Unhealthy competition—too many people clamouring for too little money and too few desirable positions—creates its own problems, most notably research misconduct and lower-quality, less innovative research. If an increasing number of scientists are scrambling for jobs and resources, some might begin to cut corners in order to gain a competitive edge. Moreover, many in the science community worry that every publicized case of research misconduct could jeopardize those resources, if politicians and taxpayers become unwilling to invest in a research system that seems to be riddled with fraud and misconduct.

The biomedical research enterprise in the USA provides a useful context in which to examine the level of competition for resources among academic scientists. My thesis is that the system of publicly funded research in the USA, as it is currently configured, supports a feedback system of institutional incentives that generates excessive competition for resources in biomedical research. These institutional incentives encourage universities to overproduce graduate students and postdoctoral scientists, who are both trainees and a cheap source of skilled labour for research while in training.
However, once they have completed their training, they become competitors for money and positions, thereby exacerbating competitive pressures.

The resulting scarcity of resources, partly through its effect on peer review, leads to a shunting of resources away from both younger researchers and the most innovative ideas, which undermines the effectiveness of the research enterprise as a whole. Faced with an increasing number of grant applications and the consequent decrease in the percentage of projects that can be funded, reviewers tend to 'play it safe' and favour projects that have a higher likelihood of yielding results, even if the research is conservative in the sense that it does not explore new questions. Resource scarcity can also introduce unwanted randomness to the process of determining which research gets funded. A large group of scientists, led by a cancer biologist, has recently mounted a campaign against a change in a policy of the National Institutes of Health (NIH) to allow only one resubmission of an unfunded grant proposal (Wadman, 2011). The core of their argument is that peer reviewers are probably able to distinguish the top 20% of research applications from the rest, but that within that top 20%, distinguishing the top 5% or 10% means asking peer reviewers for a level of precision that is simply not possible. With funding levels in many NIH institutes now within that 5–10% range, the argument is that reviewers are being forced to choose at random which excellent applications do and do not get funding.
In addition to the inefficiency of overproduction and excessive competition in terms of their costs to society and opportunity costs to individuals, these institutional incentives might undermine the integrity and quality of science, and reduce the likelihood of breakthroughs.

My colleagues and I have expressed such concerns about workforce dynamics and related issues in several publications (Martinson, 2007; Martinson et al, 2005, 2006, 2009, 2010). Early on, we observed that, “missing from current analyses of scientific integrity is a consideration of the wider research environment, including institutional and systemic structures” (Martinson et al, 2005). Our more recent publications have been more specific about the institutional and systemic structures concerned. It seems that at least a few important leaders in science share these concerns.

In April 2009, the NIH, through the National Institute of General Medical Sciences (NIGMS), issued a request for applications (RFA) calling for proposals to develop computational models of the research workforce (http://grants.nih.gov/grants/guide/rfa-files/RFA-GM-10-003.html). Although such an initiative might be premature given the current level of knowledge, the rationale behind the RFA seems irrefutable: “there is a need to […] pursue a systems-based approach to the study of scientific workforce dynamics.” Roughly four decades after the NIH appeared on the scene, this is, to my knowledge, the first official, public recognition that the biomedical workforce tends not to conform nicely to market forces of supply and demand, despite the fact that others have previously made such arguments.

Early last year, Francis Collins, Director of the NIH, published a Policy Forum article in Science voicing many of the concerns I have expressed here: that specific influences have produced growth rates in the science workforce that are undermining the effectiveness of research in general, and of biomedical research in particular.
He notes the increasing stress in the biomedical research community since the end of the NIH “budget doubling” between 1998 and 2003, and the likelihood of further disruption when funding under the American Recovery and Reinvestment Act of 2009 (ARRA) ends in 2011. Arguing that innovation is crucial to the future success of biomedical research, he notes the tendency towards conservatism in the NIH peer-review process, and how this worsens in fiscally tight times. Collins further highlights the ageing of the NIH workforce—as grants increasingly go to older scientists—and the increasing time that researchers spend in itinerant and low-paid postdoctoral positions as they stack up in a holding pattern, waiting for faculty positions that may or may not materialize. Having noted these challenging trends, and echoing the central concerns of a 2007 Nature commentary (Martinson, 2007), he concludes that “…it is time for NIH to develop better models to guide decisions about the optimum size and nature of the US workforce for biomedical research. A related issue that needs attention, though it will be controversial, is whether institutional incentives in the current system that encourage faculty to obtain up to 100% of their salary from grants are the best way to encourage productivity.”

Similarly, Bruce Alberts, Editor-in-Chief of Science, writing about incentives for innovation, notes that the US biomedical research enterprise includes more than 100,000 graduate students and postdoctoral fellows.
He observes that “only a select few will go on to become independent research scientists in academia”, and argues that, “assuming that the system supporting this career path works well, these will be the individuals with the most talent and interest in such an endeavor” (Alberts, 2009).

His editorial is not concerned with what happens to the remaining majority, but argues that even among the select few who manage to succeed, the funding process for biomedical research “forces them to avoid risk-taking and innovation”. The primary culprit, in his estimation, is the conservatism of the traditional peer-review system for federal grants, which values “research projects that are almost certain to ‘work'”. He continues, “the innovation that is essential for keeping science exciting and productive is replaced by […] research that has little chance of producing the breakthroughs needed to improve human health.”

Although I believe his assessment of the symptoms is correct, I think he has misdiagnosed the cause, in part because he does not specify which of the many interacting influences on biomedical research concerns him. To put the influences that concern Alberts in context, we must consider the remaining majority of doctorally trained individuals so easily dismissed in his editorial, and examine further what drives the dynamics of the biomedical research workforce.

Labour economists might argue that, in the long term, market forces will balance the number of individuals with doctorates against the number of appropriate jobs for them. Such arguments ignore, however, the typical information asymmetry between incoming graduate students, whose knowledge of their eventual job opportunities and career options is by definition far more limited than that of those who run the training programmes.
They also ignore the fact that universities are generally not confronted with the externalities resulting from the overproduction of PhDs, and have positive financial incentives that encourage overproduction. During the past 40 years, NIH ‘extramural' funding has become crucial for graduate student training, faculty salaries and university overheads. For their part, universities have embraced NIH extramural funding as a primary revenue source that, for a time, allowed them to implement a business model based on two interconnected assumptions: that, as one of the primary ‘outputs' or ‘products' of the university, more doctorally trained individuals are always better than fewer; and that, because these individuals are an excellent source of cheap, skilled labour during their training, they help to contain the real costs of faculty research.

This business model, however, has also made universities increasingly dependent on NIH funding. As the economist Paula Stephan has recently documented, most faculty growth in graduate school programmes during the past decade has occurred in medical colleges, with the majority—more than 70%—in non-tenure-track positions. Arguably, this represents a shift of risk away from universities and onto their faculty. Despite perennial cries of concern about shortages in the research workforce (Butz et al, 2003; Kennedy et al, 2004; National Academy of Sciences et al, 2005), a number of commentators have recently expressed concerns that the current system of academic research might be overbuilt (Cech, 2005; Heinig et al, 2007; Martinson, 2007; Stephan, 2007).
Some explicitly connect this to the structural arrangements between universities and NIH funding (Cech, 2005; Collins, 2007; Martinson, 2007; Stephan, 2007).

In 1995, David Korn pointed out what he saw as some problematic aspects of the business model employed by Academic Medical Centers (AMCs) in the USA during the past few decades (Korn, 1995). He noted the reliance of AMCs on the relatively low-cost but highly skilled labour of postdoctoral fellows, graduate students and others—who quickly start to compete with their own professors and mentors for resources. Having identified the economic dependence of the AMCs on these inexpensive labour pools, he noted additional problems with the graduate training programmes themselves: “These programs are […] imbued with a value system that clearly indicates to all participants that true success is only marked by the attainment of a faculty position in a high-profile research institution and the coveted status of principal investigator on NIH grants.” Pointing to “more than 10 years of severe supply/demand imbalance in NIH funds”, Korn concluded that, “considering the generative nature of each faculty mentor, this enterprise could only sustain itself in an inflationary environment, in which the society's investment in biomedical research and clinical care was continuously and sharply expanding.”

From 1994 to 2003, total funding for biomedical research in the USA increased at an annual rate of 7.8%, after adjustment for inflation. The comparable rate of growth between 2003 and 2007 was 3.4% (Dorsey et al, 2010). These observations resonate with the now classic observation by Derek J.
de Solla Price, made more than 30 years earlier, that growth in science frequently follows an exponential pattern that cannot continue indefinitely; the enterprise must eventually reach a plateau (de Solla Price, 1963).

In May 2009, echoing some of Korn's observations, Nobel laureate Roald Hoffmann caused a stir in the US science community when he argued for a “de-coupling” of the dual roles of graduate students as trainees and cheap labour (Hoffmann, 2009). His suggestion was to cease supporting graduate students with faculty research grants, and to use the money instead to create competitive awards for which graduate students could apply, making them more similar to free agents. During the ensuing discussion, Shirley Tilghman, president of Princeton University, argued that “although the current system has succeeded in maximizing the amount of research performed […] it has also degraded the quality of graduate training and led to an overproduction of PhDs in some areas. Unhitching training from research grants would be a much-needed form of professional ‘birth control'” (Mervis, 2009).

Although the issue of what I will call the ‘academic birth rate' is the central concern of this analysis, the ‘academic end-of-life' also warrants some attention. The greying of the NIH research workforce is another important driver of workforce dynamics, and it is integrally linked to the fate of young scientists. A 2008 news item in Science quoted then 70-year-old Robert Wells, a molecular geneticist at Texas A&M University: “If I and other old birds continue to land the grants, the [young scientists] are not going to get them.” He worries that the budget will not be able to support “the 100 people I've trained […] to replace me” (Kaiser, 2008).
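The funding trajectory reported by Dorsey et al (2010) is worth turning into cumulative figures, because compounding makes the contrast starker than the annual rates suggest. The following back-of-the-envelope sketch is our own illustration (the function name is invented); it uses only the two growth rates quoted above:

```python
def cumulative_growth(annual_rate: float, years: int) -> float:
    """Total real growth multiplier after compounding an annual rate."""
    return (1.0 + annual_rate) ** years

# 1994-2003: 7.8% real annual growth, sustained for 9 years
boom = cumulative_growth(0.078, 9)
# 2003-2007: 3.4% real annual growth, sustained for 4 years
slowdown = cumulative_growth(0.034, 4)

print(f"1994-2003 multiplier: {boom:.2f}")      # ~1.97: funding nearly doubled
print(f"2003-2007 multiplier: {slowdown:.2f}")  # ~1.14: growth has flattened
```

At 7.8% compound real growth the enterprise roughly doubles every nine years; at 3.4% the doubling time stretches to about 21 years. This is the arithmetic behind de Solla Price's point: a system built around the expectation of exponential expansion comes under real strain when the curve begins to plateau.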
While his claim of 100 trainees might be astonishing, it might be more astonishing that his was the outlying perspective: the majority of senior scientists interviewed for that article voiced intentions to keep doing science—and going after NIH grants—until someone forced them to stop or they died.

Some have looked at the current situation with concern, primarily because of the threats it poses to the financial and academic viability of universities (Korn, 1995; Heinig et al, 2007; Korn & Heinig, 2007), although most of those who express such concerns have been distinctly reticent to acknowledge the role of universities in creating and maintaining the situation. Others have expressed concerns about the differential impact of extreme competition and meagre job prospects on the recruitment, development and career survival of young and aspiring scientists (Freeman et al, 2001; Kennedy et al, 2004; Martinson et al, 2006; Anderson et al, 2007a; Martinson, 2007; Stephan, 2007). There seems to be little disagreement, however, that the system has generated excessively high competition for federal research funding, and that this threatens to undermine the very innovation and production of knowledge that is its raison d'être.

The production of knowledge in science, particularly of the ‘revolutionary' variety, is generally not a linear input–output process with predictable returns on investment, clear timelines and high levels of certainty (Lane, 2009). On the contrary, it is arguable that “revolutionary science is a high risk and long-term endeavour which usually fails” (Charlton & Andras, 2008). Predicting where, when and by whom breakthroughs in understanding will be produced has proven to be an extremely difficult task.
In the face of such uncertainty, and denying the realities of finite resources, some have argued that the best bet is to maximize the number of scientists, using that logic to justify a steady-state production of new PhDs regardless of whether the labour market is signalling increasing or decreasing demand for that supply. Only recently have we begun to explore the effects of the current arrangement on the process of knowledge production, and on innovation in particular (Charlton & Andras, 2008; Kolata, 2009).

Bruce Alberts, in the above-mentioned editorial, points to several initiatives launched by the NIH that aim to get a larger share of NIH funding into the hands of young scientists with particularly innovative ideas, including the “New Innovator Award”, the “Pioneer Award” and the “Transformational R01 Awards”. The proportion of NIH funding dedicated to these awards, however, amounts to “only 0.27% of the NIH budget” (Alberts, 2009). Such a small share of the budget seems unlikely to generate a large amount of more innovative science. Moreover, to the extent that such initiatives succeed in enticing more young investigators to become dependent on NIH funds, any benefit in terms of innovation may be offset by further increases in competition for resources when these new ‘innovators' reach the end of their specialty funding and join the rank and file of those scrambling for funds through the standard mechanisms.

Our studies on research integrity have been oriented mostly towards understanding how the environments within which academic scientists work might affect their behaviour, and thus the quality of the science they produce (Anderson et al, 2007a, 2007b; Martinson et al, 2009, 2010).
My colleagues and I have focused on whether biomedical researchers perceive fairness in the various exchange relationships within their work systems. I am persuaded by the argument that expectations of fairness in exchange relationships have been hard-wired into us through evolution (Crockett et al, 2008; Hsu et al, 2008; Izuma et al, 2008; Pennisi, 2009), with the advent of modern markets being a primary manifestation of this. Violations of these expectations therefore strike me as potentially corrupting influences. Such violations might be prime motivators for ill will, possibly engendering bad-faith behaviour among those who perceive themselves to have been slighted, and thereby increasing the risk of research misconduct. They might also corrupt the enterprise by signalling to talented young people that biomedical research is an inhospitable environment in which to develop a career, chasing away some of the most talented individuals and selecting for characteristics that might not be optimal for scientific innovation and productivity (Charlton, 2009).

To the extent that we have an ecology of steep competition, fraught with high risks of career failure for young scientists after they incur large costs of time, effort and sometimes money to obtain a doctoral degree, why would we expect them to take on the additional, substantial risks involved in doing truly innovative science and asking risky research questions? And why, in such a cut-throat setting, would we not anticipate an increase in corner-cutting, and a corrosion of good scientific practice, collegiality, mentoring and sociability? Would we not also expect a reduction in high-risk, innovative science, and a reversion to a more career-safe type of ‘normal' science? Would this not reduce the effectiveness of the institution of biomedical research?
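The steepness of this competitive ecology can be sketched with a toy steady-state model (ours, not one proposed in the literature cited above): if the number of faculty positions is roughly constant, openings arise only as incumbents retire, so the odds that any trainee ever obtains a faculty post depend solely on how many trainees each position produces over a career.

```python
def faculty_replacement_odds(career_years: float, trainees_per_year: float) -> float:
    """Steady-state odds that a trainee obtains a faculty post.

    With a fixed number of faculty F, openings per year = F / career_years,
    while new PhDs produced per year = F * trainees_per_year. The ratio of
    openings to new PhDs -- the odds facing any one trainee -- is therefore
    independent of F.
    """
    return 1.0 / (career_years * trainees_per_year)

# A 30-year career producing one new PhD every two years:
modest = faculty_replacement_odds(30, 0.5)    # ~0.067, about 1 in 15

# The '100 people I've trained to replace me' scenario quoted earlier:
prolific = faculty_replacement_odds(50, 2.0)  # 0.01, i.e. 1 in 100
```

Under these assumptions, no plausible choice of parameters brings the odds anywhere near one: the system can absorb its own output only if most doctorate holders leave the academic track, or if the enterprise keeps expanding exponentially, which, as de Solla Price observed, it cannot do indefinitely.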
I do not claim to know the conditions needed to maximize the production of research that is novel, innovative and conducted with integrity. I am fairly certain, however, that putting scientists in tenuous positions, in which their careers and livelihoods are put at risk by pursuing truly revolutionary research, is one way to insure against it.
Assisted reproductive technologies enable subfertile couples to have children. But there
are health risks attached for both mothers and children that need to be properly understood
and managed.

Assisted reproductive technology (ART) has become a standard intervention for couples with
infertility problems, especially as ART is highly successful and overall carries low risks
[1,2]. The number of
infants born following ART has increased steadily worldwide, with more than 5,000,000 so far
[3]. In industrialized countries, 1–4% of
newborns have been conceived by using ART [4,5], probably owing to the fact that couples frequently delay
childbearing until their late 30s, when fertility decreases in both men and women
[2]. Considering the possibility that male fertility
might be declining, as Richard Sharpe has discussed in this series [6], it is likely that ART will be even more widely used in the future. Yet,
as the rate of ART and the total number of pregnancies has increased, it has become apparent
that ART is associated with potential risks to the mother and fetus. The most commonly cited
health problems pertain to multiple gestation pregnancies and multiple births. More recently,
however, concerns about the risks of birth defects and genetic disorders have been raised.
In particular, there are questions about whether the required manipulation of gametes and
embryos, and the artificial environments to which they are exposed, create short- and
long-term health risks in mothers and children by interfering with epigenetic
reprogramming.

Notwithstanding these concerns, ART represents a tremendous achievement in human reproductive medicine. The
birth of Louise Brown, the first ‘test tube baby', in 1978 was the result of the
collaborative work of embryologist Robert Edwards and gynaecologist Patrick Steptoe
[7]. This success was a culmination of many years of
work at universities and clinics worldwide. An initial lack of support, as well as criticism
from ethicists and the church, delayed the opening of the first in vitro fertilization
(IVF) clinic in Bourn Hall near Cambridge until 1980. By 1986, 1,000 children conceived by IVF
at Bourn Hall had been born [8]. In 2010, Edwards
received the Nobel Prize in Physiology or Medicine for the development of IVF. Regrettably,
Steptoe had passed away in 1988 and could not share the honour.

Over the following decades, many improvements to IVF procedures were made to reduce the risks of
adverse effects and increase success rates, including controlled ovarian stimulation, timed
ovulation induction, ultrasound-guided egg retrieval, cryopreservation of embryos and
intracytoplasmic sperm injection (ICSI)—a technique in which a single sperm cell is
injected into an oocyte using a microneedle. Further improvements included assisted hatching
and better culture media, notably sequential media, which allow embryos to be cultured in
vitro to the blastocyst stage [8].

Current IVF procedures involve multiple steps, including ovarian stimulation and monitoring,
oocyte retrieval from the ovary, fertilization in vitro and embryo transfer to the
womb. Whereas the first IVF cycles, including the conception of Louise Brown, used natural
ovulatory cycles, which result in the retrieval of one or two oocytes, most IVF cycles
performed today rely on controlled ovarian stimulation using injectable
gonadotropins—follicle stimulating hormone and luteinizing hormone—in
supraphysiological concentrations for 10–14 days, followed by injection of human
chorionic gonadotropin (hCG) 38–40 h before egg retrieval to trigger ovulation. This
updated protocol makes it possible to grow multiple follicles and to retrieve 10–20
oocytes in one IVF cycle, thereby increasing the number of eggs available for
fertilization.

After retrieval, the embryologist places egg and sperm together in a test tube for
fertilization. Alternatively, a single sperm cell can be injected into an egg by using ICSI.
This procedure was initially developed for couples with poor sperm quality [9], but has become the predominant fertilization technique used in
many IVF clinics worldwide [8]. The developing embryos
are monitored by microscopy, and viable embryos are transferred into the woman's womb for
implantation. Louise Brown, like many embryos today, was transferred three days after egg
retrieval, at approximately the eight-cell stage. However, using sequential media, many
clinics advocate culturing embryos until day five when they reach the blastocyst stage. The
prolonged culture period allows self-selection of the most viable embryos for transfer and
increases the chance of a viable pregnancy. Excess embryos can be cryopreserved and
transferred at a later date by using a procedure known as frozen embryo transfer (FET). In
this article we use the term ART to refer to IVF procedures with or without ICSI and FET.