1.
The diversification of prokaryotes is accelerated by their ability to acquire DNA from other genomes. However, the underlying processes also facilitate genome infection by costly mobile genetic elements. The discovery that cells can take up DNA by natural transformation was instrumental to the birth of molecular biology nearly a century ago. Surprisingly, a new study shows that this mechanism could efficiently cure the genome of mobile elements acquired through previous sexual exchanges.

Horizontal gene transfer (HGT) is a key contributor to the genetic diversification of prokaryotes [1]. Its frequency in natural populations is very high, leading to species’ gene repertoires with relatively few ubiquitous (core) genes and many low-frequency genes (present in a small proportion of individuals). The latter are responsible for much of the phenotypic diversity observed in prokaryotic species and are often encoded in mobile genetic elements that spread between individual genomes as costly molecular parasites. Hence, HGT of interesting traits is often carried by expensive vehicles.

The net fitness gain of horizontal gene transfer depends on the genetic background of the new host, the acquired traits, the fitness cost of the mobile element, and the ecological context [2]. A study published in this issue of PLOS Biology [3] proposes that a mechanism originally thought to favor the acquisition of novel DNA—natural transformation—might actually allow prokaryotes to clean their genomes of mobile genetic elements.

Natural transformation allows the uptake of environmental DNA into the cell (Fig 1). It differs markedly from the other major mechanisms of HGT by depending exclusively on the recipient cell, which controls the expression of the transformation machinery and favors exchanges with closely related taxa [4]. DNA arrives in the cytoplasm in the form of small single-stranded fragments.
If it is not degraded, it may integrate into the genome by homologous recombination at regions of high sequence similarity (Fig 1). This results in allelic exchange between a fraction of the chromosome and the foreign DNA. Depending on the recombination mechanisms operating in the cell and on the extent of sequence similarity between the transforming DNA and the genome, alternative recombination processes may take place. Nonhomologous DNA flanked by regions of high similarity can be integrated by double homologous recombination at the edges (Fig 1E). Mechanisms mixing homologous and illegitimate recombination require less strict sequence similarity and may also integrate nonhomologous DNA into the genome [5]. Some of these processes lead to small deletions of chromosomal DNA [6]. These alternative recombination pathways allow the bacterium to lose and/or acquire novel genetic information.

Fig 1. Natural transformation and its outcomes. The mechanism of environmental DNA uptake brings into the cytoplasm small single-stranded DNA fragments (A). Earlier models for the raison d’être of natural transformation have focused on the role of DNA as a nutrient (B), as a breaker of genetic linkage (C), or as a substrate for DNA repair (D). The chromosomal curing model allows the removal of mobile elements by recombination between conserved sequences at their extremities (E). The model is strongly affected by the size of the incoming DNA fragments, since the probability of uptake of a mobile element rapidly decreases with the size of the element and of the incoming fragments (F). This leads to a bias towards the deletion of mobile elements by recombination, especially the largest ones.
In spite of this asymmetry, some mobile elements can integrate into the genome via natural transformation, following homologous recombination between large regions of high sequence similarity (G) or homology-facilitated illegitimate recombination in short regions of sequence similarity (H).

Natural transformation was the first described mechanism of HGT. Its discovery, in the first half of the 20th century, was instrumental in demonstrating that DNA is the carrier of genetic information. This mechanism is also regularly used to genetically engineer bacteria. Researchers have thus long been puzzled by the lack of any consensus regarding the raison d’être of natural transformation.

Croucher, Fraser, and colleagues propose that the small size of recombining DNA fragments arising from transformation biases the outcome of recombination towards the deletion of chromosomal genetic material (Fig 1F). Incoming DNA carrying the core genes that flank a mobile element, but missing the element itself, can provide small DNA fragments that become templates to delete the element from the recipient genome (Fig 1E). The inverse scenario, incoming DNA carrying the core genes and a mobile element absent from the genome, is unlikely because the mobile element is large and the recombining transformation fragments are small. Importantly, this mechanism most efficiently removes loci at low frequency in the population, because incoming DNA is more likely to lack such intervening sequences when they are rare. Invading mobile genetic elements are initially at low frequencies in populations and will therefore be frequently deleted by this mechanism. Hence, recombination will be strongly biased towards the deletion or inactivation of large mobile elements such as phages, integrative conjugative elements, and pathogenicity islands.
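The size asymmetry behind this bias can be illustrated with a toy calculation (all lengths below are illustrative assumptions, not values from the study): a fragment can serve as a deletion template if it merely spans the empty insertion site plus a homology arm on each side, but it can only import the element if it contains the entire element plus both arms.

```python
def covering_starts(frag_len, target_len):
    """Number of start positions at which a fragment of length frag_len
    fully covers a contiguous target region of length target_len."""
    return max(frag_len - target_len + 1, 0)

# Illustrative lengths in base pairs (assumptions for the sketch):
h = 1000    # homology arm required on each side of the recombined region
m = 20000   # length of the mobile element
f = 5000    # length of a typical transforming fragment

# Donor lacks the element: the fragment only needs to span the empty
# junction with one arm on each side (target = 2*h).
deletion_templates = covering_starts(f, 2 * h)

# Donor carries the element: the fragment must contain the whole element
# plus both arms (target = m + 2*h), impossible here because f < m.
acquisition_templates = covering_starts(f, m + 2 * h)

print(deletion_templates, acquisition_templates)  # 3001 0
```

With fragments shorter than the element, deletion templates are abundant while acquisition templates cannot occur at all, reproducing in miniature the bias sketched in Fig 1F.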
Simulations at a population scale show that transformation could even counteract the horizontal spread of mobile elements.

An obvious limit of natural transformation is that it can't cope with mobile genetic elements that rapidly take control of the cell, such as virulent phages, or that remain extra-chromosomal, such as plasmids. Another limit of transformation is that it facilitates the acquisition of costly mobile genetic elements [7,8], especially if these are small. When these elements replicate in the genome, as is the case for transposable elements, they may become difficult to remove by subsequent events of transformation. Further work will be needed to quantify the costs associated with such infections.

Low-frequency adaptive genes might be deleted through transformation in the way proposed for mobile genetic elements. However, adaptive genes rise rapidly to high frequency in populations, becoming too frequent to be affected by transformation. Interestingly, genetic control of transformation might favor the removal of mobile elements incurring fitness costs while preserving those carrying adaptive traits [3]. Transformation could thus effectively cure chromosomes and other replicons of deleterious mobile genetic elements integrated in previous events of horizontal gene transfer while preserving recently acquired genes of adaptive value.

Prokaryotes encode an arsenal of immune systems to prevent infection by mobile elements and several regulatory systems to repress their expression [9]. Under the new model (henceforth named the chromosomal curing model), transformation occupies a key, novel position in this arsenal because it allows the expression of the incoming DNA while subsequently removing deleterious elements from the genome.

Mobile elements encode their own tools to evade host immune systems [9]. Accordingly, they can also be expected to interfere with natural transformation [3].
Some mobile genetic elements integrate at, and thus inactivate, genes encoding the machineries required for DNA uptake or recombination. Other elements express nucleases that degrade exogenous DNA (precluding its uptake). These observations suggest an evolutionary arms race between the host, which uses natural transformation to cure its genome, and mobile genetic elements, which target these functions for their own protection. This lends further credibility to the hypothesis that transformation is a key player in the intra-genomic conflicts between prokaryotes and their mobile elements.

Previous studies have proposed alternative explanations for the evolution of natural transformation, including the possibility that it was driven by selection for allelic recombination and horizontal gene transfer [10], for nutrient acquisition [11], or for DNA repair [12]. The latter hypothesis has recently enjoyed renewed interest following observations that DNA-damaging agents induce transformation [13,14], along with intriguing suggestions that competence might be advantageous even in the absence of DNA uptake [15,16]. The hypothesis that transformation evolved to acquire nutrients has received less support in recent years.

Two key traits specific to transformation—host genetic control of the process and selection for conspecific DNA—share some resemblance with recombination processes occurring during sexual reproduction. Yet the analogy between the two processes must be handled with care, because transformation results, at best, in gene conversion of relatively small DNA fragments from another individual. The effect of sexual reproduction on genetic linkage is thought to be advantageous in the presence of genetic drift or of weak and negative or fluctuating epistasis [17].
Interestingly, these conditions could frequently be met by bacterial pathogens [18], which might explain why there are so many naturally transformable bacteria among human pathogens, such as Streptococcus pneumoniae, Helicobacter pylori, Staphylococcus aureus, Haemophilus influenzae, or Neisseria spp. The most frequent criticism of the analogy between transformation and sexual reproduction is that environmental DNA from dead individuals is unlikely to carry better alleles than those of the living recipient [11]. This difficulty is circumvented in bacteria that actively export copies of their DNA to the extracellular environment. Furthermore, recent theoretical studies showed that competence could be adaptive even when the DNA originates from individuals with lower-fitness alleles [19,20]. Mathematically speaking, sexual exchanges with the dead might be better than no exchanges at all.

Evaluating the relative merits of the different models aiming to explain the raison d’être of natural transformation is complicated because they share several predictions. For example, the induction of competence in maladapted environments can be explained by the need for DNA repair (more DNA damage in these conditions), by selection for adaptation (through recombination or HGT), and by the chromosomal curing model, because mobile elements are more active under such conditions (leading to more intense selection for their inactivation). Some of the predictions of the latter model—the rapid diversification and loss of mobile elements and their targeting of the competence machinery—can also be explained by models involving competition between mobile elements and their antagonistic association with the host. One of the great uses of mathematical models in biology resides in their ability to pinpoint the range of parameters and conditions within which each model can apply. The chromosomal curing model remains valid under broad ranges of variation of many of its key variables.
This might not be the case for alternative models [3].

While further theoretical work will certainly help to specify the distinctive predictions of each model, realistic experimental evolution studies will be required to test them. Unfortunately, the few pioneering studies on this topic have reached somewhat contradictory conclusions. Some showed that natural transformation was beneficial to bacteria adapting to suboptimal environments (e.g., in times of starvation or in stressful environments) [21,22], whereas others showed it was most beneficial during exponential growth and early stationary phase [23]. Finally, at least one study showed a negative effect of transformation on adaptation [24]. Part of these discrepancies might reflect differences between species, which express transformation under different conditions. They might also result from the low intraspecies genetic diversity in these experiments, in which case the use of more representative communities might clarify the conditions favoring transformation.

Macroevolutionary studies on natural transformation are hindered by the small number of prokaryotes known to be naturally transformable (82 species, following [25]). In itself, this poses a challenge: if transformation is adaptive, then why does it seem to be so rare? The benefits associated with the deletion of mobile elements, with functional innovation, or with DNA repair seem sufficiently general to affect many bacterial species. The trade-offs between the costs and benefits of transformation might lead to its selection only when mobile elements are particularly deleterious for a given species or when species face particular adaptive challenges. According to the chromosomal curing model, selection for transformation would be stronger in highly structured environments or when recombination fragments are small.
There is also some evidence that we have failed to identify numerous naturally transformable prokaryotes, in which case the question above may lose part of its relevance. Many genomes encode key components of the transformation machinery, suggesting that this process might be more widespread than currently acknowledged [25]. As an illustration, the ultimate model organism of microbiology—Escherichia coli—has only recently been shown to be naturally transformable; the conditions leading to the expression of this trait remain unknown [26].

The chromosomal curing model might contribute to explaining other mechanisms shaping the evolution of prokaryotic genomes beyond the removal of mobile elements. Transformation-mediated deletion of genetic material, especially by homology-facilitated illegitimate recombination (Fig 1H), could remove genes involved in the mobility of the genetic elements, facilitating the co-option by the host of functions encoded by mobile genetic elements. Several recent studies have pinpointed the importance of such domestication processes in functional innovation and bacterial warfare [27]. The model might also be applicable to other mechanisms that transfer small DNA fragments between cells. These processes include gene transfer agents [28], extracellular vesicles [29], and possibly nanotubes [30]. The chromosomal curing model might help unravel their ecological and evolutionary impact.

2.
3.
As climate change increasingly threatens agricultural production, expanding genetic diversity in crops is an important strategy for climate resilience in many agricultural contexts. In this Essay, we explore the potential of crop biotechnology to contribute to this diversification, especially in industrialized systems, by using historical perspectives to frame the current dialogue surrounding recent innovations in gene editing. We unearth comments about the possibility of enhancing crop diversity made by ambitious scientists in the early days of recombinant DNA and follow the implementation of this technology, which has not generated the diversification some anticipated. We then turn to recent claims about the promise of gene editing tools with respect to this same goal. We encourage researchers and other stakeholders to engage in activities beyond the laboratory if they hope to see what is technologically possible translated into practice at this critical point in agricultural transformation.

Will gene editing contribute to improved crop diversity and climate resilience? In this Essay, the authors look at lessons from past biotechnology efforts to inform action for the future.

In 1970, a virulent fungal blight decimated the United States corn harvest. This southern corn leaf blight epidemic was linked to a subset of genes that made certain varieties more susceptible than others—genes that also happened to be shared across some 75% of commercial varieties [1,2]. The blight arrived just as scientists concerned about a more general loss of genetic diversity in crop plants, both in the US and abroad, were finally gaining the ear of governments and philanthropies. They called for more and better gene bank facilities and, brandishing blighted maize as the canary in the coal mine, a re-diversification of industrial crops [3–5].

With these concerns as motivation, some researchers pointed to the possibility of increasing genetic diversity among cultivars of a given crop with a brand-new biotechnology: recombinant DNA [6]. These techniques could be used to introduce novel genes into the high-yielding but genetically narrow lines dominating commercial markets. But this anticipated use of recombinant DNA technologies for expanding genetic diversity has yet to materialize.

The need to diversify crops is coming back into focus due to increasingly urgent climate and nutrition challenges [7–9]. Diversified agricultural systems are more resilient to climate hazards and can stabilize food production [10]. Increasing genetic diversity, by both widening the genetic bases of commonly cultivated crop species and restoring a greater number of species to cultivation, is therefore a high priority for climate action.

Biotechnology is once again offering a path forward. Today’s plant scientists are developing gene editing techniques that could facilitate genetic diversification of commodities like wheat, rice, and maize and potentially support the adoption or continued cultivation of “neglected” crops that have been less often subject to crop breeding and development activities. But will gene editing really generate a diversity boom?
Can it upend a pattern of genetic narrowing that breeders and botanists have observed since the late 19th century—a pattern frequently pinpointed as a major source of vulnerability in global agricultural production systems?

Excavating comments that reveal an often forgotten subset of early aspirations for recombinant DNA technologies provides insight into contemporary dialogue about gene editing. The history of these technologies illustrates the extent to which diversification depends on much more than a laboratory toolkit. Awareness of past efforts can inform today’s aspirations for and decision-making about the use of crop biotechnologies to enhance genetic diversity.

4.
What explains why some groups of organisms, like birds, are so species rich? And what explains their extraordinary ecological diversity, ranging from large, flightless birds to small migratory species that fly thousands of kilometers every year? These and similar questions have spurred great interest in adaptive radiation, the diversification of ecological traits in a rapidly speciating group of organisms. Although the initial formulation of modern concepts of adaptive radiation arose from consideration of the fossil record, rigorous attempts to identify adaptive radiation in the fossil record are still uncommon. Moreover, most studies of adaptive radiation concern groups that are less than 50 million years old. Thus, it is unclear how important adaptive radiation is over temporal scales that span much larger portions of the history of life. In this issue, Benson et al. test the idea of a “deep-time” adaptive radiation in dinosaurs, compiling and using one of the most comprehensive phylogenetic and body-size datasets for fossils. Using recent phylogenetic statistical methods, they find that in most clades of dinosaurs there is a strong signal of an “early burst” in body-size evolution, a predicted pattern of adaptive radiation in which rapid trait evolution happens early in a group’s history and then slows down. They also find that body-size evolution did not slow down in the lineage leading to birds, hinting at why birds survived to the present day and diversified. This paper represents one of the most convincing attempts at understanding deep-time adaptive radiations.
“It is strikingly noticeable from the fossil record and from its results in the world around us that some time after a rather distinctive new adaptive type has developed it often becomes highly diversified.” – G. G. Simpson ([1], pp. 222–223)
George Gaylord Simpson was the father of modern concepts of adaptive radiation—the diversification of ecological traits in a rapidly speciating group of organisms (Figure 1; [2]). He considered adaptive radiation to be the source of much of the diversity of living organisms on planet earth, in terms of species number, ecology, and body form [1–3]. Yet more than 60 years after Simpson’s seminal work, the exact role of adaptive radiation in generating life’s extraordinary diversity is still an open and fundamental question in evolutionary biology [3],[4].

Figure 1. An example of adaptive radiation and early bursts in rates of speciation and phenotypic evolution. (a) The adaptive radiation of the modern bird clade Vanginae, which shows early rapid speciation, morphological diversity, and diversity in foraging behavior and diet [15],[32]. (b) Hypothetical curve of speciation rates through time that would be expected in adaptive radiation. The exponential decline in speciation rates shows that there was an “early burst” of speciation at the beginning of the clade’s history. (c) Hypothetical curve of rates of phenotypic evolution through time that would be expected in adaptive radiation, also showing an early burst of evolution with high initial rates. Part (a) is reproduced from [32] with permission (under CC-BY) from the Royal Society and the original authors.

To address this question, researchers have looked for signatures of past adaptive radiation in the patterns of diversity in nature. In particular, it has been suggested that groups that have undergone adaptive radiation should show an “early-burst” signal in both rates of lineage diversification and phenotypic evolution through time—a pattern in which rates of speciation and phenotypic evolution are fast early in the history of groups and then decelerate over time (Figure 1; [3–5]).
These predictions arise from the idea that, in an adaptive radiation, clades should multiply and diversify rapidly in species number, ecology, and phenotype, and that rates of this diversification should decrease later as niches are successively occupied [2].

Early bursts have been sought in both fossils and phylogenies. Few fossil studies have discussed their results in the context of adaptive radiation (but see [6]), but they often have found rapid rises in both taxonomic and morphological diversity early in the history of various groups [7], ranging from marine invertebrates [8] to terrestrial mammals [9]. However, fossils often lack the phylogeny needed to model how evolution has proceeded [7]. On the other hand, studies that test for early bursts in currently existing (extant) species typically use phylogenies, which allow us to model past evolution in groups with few or no fossils [5]. Phylogenies have most often been used to test for early bursts in speciation (see, e.g., [10]). However, such tests may be misled by past extinction, which will decay the statistical signal of rapid, early diversification [11]. Furthermore, diverse evolutionary scenarios beyond adaptive radiation can give rise to early bursts in speciation [12]. By contrast, studies of phenotypic diversification may be more robust to extinction [13], and they test the distinguishing feature that separates adaptive from nonadaptive radiation [2],[12].

Thus, studies of adaptive radiation in extant organisms increasingly have focused on phylogenetic tests of the early-burst model of phenotypic evolution. Some studies show strong support for this prediction in both birds [14],[15] and lizards [5],[16]. However, the most extensive study to date showed almost no support for the early-burst model. In this study, Harmon et al. [17] examined body size in 49 (and shape in 39) diverse groups of animals, including invertebrates, fishes, amphibians, reptiles, birds, and mammals.
They found strong support for the early-burst model in only two of these 88 total datasets.

This result raises an important question: if adaptive radiation explains most of life’s diversity [1], how is it possible that there is so little phylogenetic evidence for early bursts of phenotypic evolution? One possibility is that early bursts are hard to detect. This can be due to low statistical power in the most commonly employed tests [18]. It may also be due to a lack of precision in the way “early burst” is defined (and thus tested), as the ecological theory of adaptive radiation suggests that the rate of phenotypic evolution will decrease as species diversity increases in a group, not just over time [14],[16]. Indeed, recent studies [14],[16] detected a decline in rates with species diversity in clades that were also in the Harmon et al. [17] study, yet for which no decline over time was detected.

A second possible reason why early-burst patterns are uncommon is more fundamental: the patterns of phenotypic diversity that result from adaptive radiation may be different at large time scales. Many of the best examples of adaptive radiation are in groups that are relatively young, including Darwin’s finches (2.3 million years old [myr]; [19]) and Lake Malawi and Victoria cichlids (2.3 myr; [20]), whereas most groups that are examined for early bursts in phenotypic evolution are much older (e.g., 47 of 49 in Harmon et al. [17]; mean ± sd = 23.8 ± 29.2 myr). So there may be an inherent difference between what unfolds over the relatively short time scales emphasized by Schluter [2] and what one sees at macroevolutionary time scales (see [21] for an in-depth discussion of this idea as it relates to speciation).

The time scale over which adaptive radiations unfold has been little explored. As a result, the link between extant diversity and major extinct radiations remains unclear.
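The early-burst model discussed above can be made concrete with a short numerical sketch (parameter values are illustrative assumptions, not estimates from any cited study): trait evolution is modeled as Brownian motion whose instantaneous rate decays exponentially, so most trait variance accumulates early in a clade's history.

```python
import math

def eb_expected_variance(t, sigma2_0=1.0, r=-0.05):
    """Expected trait variance accumulated by time t under an
    'early burst' model: Brownian motion with instantaneous rate
    sigma2_0 * exp(r * t), r < 0 (the integral of the rate from 0 to t)."""
    if r == 0:
        return sigma2_0 * t  # r = 0 reduces to constant-rate Brownian motion
    return sigma2_0 * (math.exp(r * t) - 1.0) / r

# Variance accrued in the first versus second half of a 100-unit history:
first_half = eb_expected_variance(50)
second_half = eb_expected_variance(100) - eb_expected_variance(50)
print(round(first_half / second_half, 1))  # 12.2: divergence is front-loaded
```

Under constant-rate Brownian motion the two halves would accrue equal variance; the sign and magnitude of the decay parameter r is, in essence, what the early-burst tests referenced above estimate from comparative data.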
Simpson [1] believed that adaptive radiation played out at the population level, but that it should manifest itself at larger scales as well—up to phyla (e.g., chordates, arthropods). He suggested that we should see signals of adaptive radiation in large, old clades because they are effectively small-scale adaptive radiation writ large [1]. Under this view, we should see the signal of adaptive radiation even in groups that diversified over vast time scales, particularly if adaptive radiation is as important for explaining life’s diversity as Simpson [1] thought it was.

Part of the reason why potential adaptive radiations at deep time scales remain poorly understood is that studies focus on either fossils or phylogenies, but rarely both. In this issue, Benson et al. [22] combine these two types of data to address whether dinosaurs show signs that they adaptively radiated. Unlike most other studies, the temporal scale of the current study is very large—in this case, over 170 million years throughout the Mesozoic era, starting at 240 million years ago in the Triassic period. This characteristic allowed Benson et al. to shed light on deep-time adaptive radiation.

The authors estimated body mass from fossils by using measurements of the circumference of the stylopodium shaft (the largest bone of the arm or leg, such as the femur), which shows a consistent scaling relationship with body mass in extant reptiles and mammals [23]. They then combined published phylogenies to obtain a composite phylogeny for the species in their body-size dataset. Finally, the authors conducted two types of tests of the rate of body-size evolution—the same tests of early bursts in phenotypic evolution as those of Harmon et al. [17], as well as an additional, less commonly used test that estimates whether the differences between estimated body sizes at adjacent phylogenetic nodes decrease over time.

Benson et al. [22] found two striking results.
First, in both of their analyses, the early-burst model was strongly supported for most clades of dinosaurs. This early burst began in the Triassic period, indicating that diversification in body size in dinosaurs began before the Triassic–Jurassic mass extinction event would have opened competition-free ecological space (as commonly hypothesized; [24],[25]). Rather, the authors [22] suggest that a key innovation led to this rise in dinosaurs, though it is not clear what this innovation was [26]. In general, though, the finding of an early burst in body-size evolution in most dinosaurs—if a consequence of adaptive evolution—suggests that adaptive radiation may play out over large evolutionary time scales, not just on the short time scales typical of the most well-studied cases of extant groups.

Second, one clade—Maniraptora, the clade in which modern-day birds are nested—was the only part of the dinosaur phylogeny that did not show such a strong early burst in body-size evolution. Instead, this clade fit a model with a single adaptive peak—an optimum body size, if you will—but also maintained high rates of undirected body-size evolution throughout its history. Benson et al. [22] suggest that this last result connects deep-time adaptive radiation in the dinosaurs, which quickly exhausted the available phenotypic space, with the current radiation in extant birds, which survived to the present day because their constant, high rate of evolution meant that they were constantly undergoing ecological innovation. This gives a glimpse into why modern birds have so many species (an order of magnitude more than the nonavian dinosaurs) and so much ecological diversity.

The use of fossils allowed Benson et al. [22] to address deep-time radiation in dinosaurs and its consequences for present-day bird diversity. Nevertheless, the promise of using fossils to understand adaptive radiation has its limits.
The paleontological dataset presented here is exceptional, yet still insufficient to explore major components of adaptive radiations, such as actual ecological diversification. As in many paleontological studies, Benson et al. used body-size data as a proxy for ecology because body size is one of the few variables available for most species. But it is unclear how cleanly body size tracks ecological diversification and niche filling, precisely because body size matters for nearly every aspect of organismal function. Consequently, evolutionary change in body size can result not only from the competition that drives adaptive radiation, but also from predation pressure, reproductive character displacement, and physiological advantages of particular body sizes in a given environment, among other causes [27].

Despite the broad coverage of extinct species presented in Benson et al. [22], the data were insufficient to study another major part of adaptive radiation: early bursts of lineage diversification. While new approaches are becoming available to study diversification with phylogenies containing extinct species [28],[29] or with incomplete fossil data [30], these approaches are limited when many taxa are known from only single occurrences. This is the case in the Benson et al. dataset, and more generally in most fossil datasets.

Given that few fossils exist for many extant groups, a major goal for future studies will be the incorporation of incomplete fossil information into analyses primarily focused on traits and clades for which mostly neontological data are available. For example, Slater et al. [31] developed an approach to include fossil information in analyses of phenotypic evolution. They showed that adding just a few fossils (12 fossils in a study of a 135-species clade) drastically increased the power and accuracy of their analyses of extant taxa.
Thus, the combination of fossil data with data from currently living species is important for future studies, as are new approaches that allow analyzing early bursts of lineage diversification along with phenotypic evolution in fossils.

So what answers do Benson et al. [22] bring to Simpson’s original question of the importance of adaptive radiation for explaining diversity on earth? The authors present an intriguing and unconventional link between adaptive radiation and the diversity of modern-day birds. They argue that bird diversification was possible because the dinosaur lineage leading to birds did not exhaust niche space, potentially thanks to small body sizes; in contrast, other dinosaur groups adaptively radiated, filled niche space, and thus could not produce the ecological innovation that may have been necessary to survive the Cretaceous–Paleogene mass extinction. This intriguing hypothesis suggests an important role for the relative starting points of successive adaptive radiations in explaining current diversity, giving a new spin to the pivotal question raised by Simpson more than 60 years ago.

5.
The modern evolutionary synthesis codified the idea that species exist as distinct entities because intrinsic reproductive barriers prevent them from merging together. Understanding the origin of species therefore requires understanding the evolution and genetics of reproductive barriers between species. In most cases, speciation is an accident that happens as different populations adapt to different environments and, incidentally, come to differ in ways that render them reproductively incompatible. As with other reproductive barriers, interspecific hybrid sterility and lethality were once thought to evolve as pleiotropic side effects of adaptation. Recent work on the molecular genetics of speciation has raised an altogether different possibility—the genes that cause hybrid sterility and lethality often come to differ between species not because of adaptation to the external ecological environment but because of internal evolutionary arms races between selfish genetic elements and the genes of the host genome. Arguably one of the best examples supporting a role of ecological adaptation comes from a population of yellow monkey flowers, Mimulus guttatus, in Copperopolis, California, which recently evolved tolerance to soil contaminants from copper mines and, simultaneously, as an incidental by-product, hybrid lethality in crosses with some off-mine populations. However, in new work, Wright and colleagues show that hybrid lethality is not a pleiotropic consequence of copper tolerance. Rather, the genetic factor causing hybrid lethality is tightly linked to copper tolerance and spread to fixation in Copperopolis by genetic hitchhiking.

New species arise when populations gradually evolve intrinsic reproductive barriers to interbreeding with other populations [1–3].
Two species can be reproductively isolated from one another in ways that prevent the formation of interspecific hybrids—the species may, for instance, have incompatible courtship signals or occupy different ecological habitats. Two species can also be reproductively isolated from one another if interspecific hybrids are formed but are somehow unfit—the hybrids may be sterile, inviable, or may simply fall between parental ecological niches. All forms of reproductive isolation limit genetic exchange between species, preventing their fusion and facilitating their further divergence. Understanding the genetic and evolutionary basis of speciation—a major cause of biodiversity—therefore involves understanding the genetic and evolutionary basis of the traits that mediate reproductive isolation.

Most reproductive barriers arise as incidental by-products of selection—either ecological adaptation or sexual selection. For these cases, the genetic basis of speciation is, effectively, the genetics of adaptation. But hybrid sterility and lethality have historically posed two special problems. Darwin [4] devoted an entire chapter of his Origin of Species to the first problem: as the sterility or lethality of hybrids provides no advantage to parents, how could the genetic factors involved possibly evolve by natural selection? The second problem was recognized much later [5], after the rediscovery of Mendelian genetics: if two species (with genotypes AA and aa) produce, say, sterile hybrids (Aa) due to an incompatibility between the A and a alleles, then how could, e.g., the AA genotype have evolved from an aa ancestor in the first place without passing through a sterile intermediate genotype (Aa)? Not only does natural selection not directly favor the evolution of hybrid sterility or lethality, but there is reason to believe natural selection positively prevents its evolution.

Together these problems stymied evolutionists and geneticists for decades. T.H.
Huxley [6] and William Bateson [5], writing decades apart, each branded the evolution of hybrid sterility one of the most serious challenges for a then-young evolutionary theory. Darwin had, in fact, offered a simple solution to the first problem: hybrid sterility and lethality are not advantageous per se but rather "incidental on other acquired differences" [4]. Then Bateson [5], in a few short, forgotten lines, solved the second problem (see [7]). Later, Dobzhansky [2] and Muller [8] would arrive at the same solution, showing that hybrid sterility or lethality could evolve readily, unopposed by natural selection, under a two-locus model with epistasis. In particular, they imagined that separate populations diverge from a common ancestor (genotype aabb), with the A allele becoming established in one population (AAbb) and the B allele in the other (aaBB); while the A and B alleles must function on their respective genetic backgrounds, there is no guarantee that they will be functionally compatible with one another. Hybrid sterility and lethality most likely result from incompatible complementary genetic factors that disrupt development when brought together in a common hybrid genome. Dobzhansky [2] and Muller [8] could point to a few supporting examples in fish, flies, and plants. Notably, like Darwin, neither speculated on the forces responsible for the evolution of the genetic factors involved.

Today, there is no doubt that the Dobzhansky-Muller model is correct, as the evidence for incompatible complementary genetic factors is now overwhelming [1],[9]. In the last decade, a fast-growing number of speciation genes involved in these genetic incompatibilities have been identified in mice, fish, flies, yeast, and plants [9–11]. Perhaps not surprisingly, these speciation genes often have histories of recurrent, adaptive protein-coding sequence evolution [10],[11].
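The two-locus logic of the Dobzhansky-Muller model can be made concrete with a minimal sketch (my illustration, not from the article; the all-or-nothing fitness values are an assumption): each population's path from the ancestral genotype passes only through fit genotypes, yet the hybrid combines the derived A and B alleles for the first time.

```python
# Minimal Dobzhansky-Muller two-locus sketch. Assumed fitness scheme:
# any genotype carrying at least one derived A allele AND at least one
# derived B allele is incompatible (fitness 0.0); all others are fit (1.0).

def fitness(genotype):
    """Genotype is a 4-character string, e.g. 'AaBb' (locus 1, locus 2)."""
    has_a = "A" in genotype[:2]
    has_b = "B" in genotype[2:]
    return 0.0 if (has_a and has_b) else 1.0

# Population 1 diverges: aabb -> Aabb -> AAbb
path1 = ["aabb", "Aabb", "AAbb"]
# Population 2 diverges: aabb -> aaBb -> aaBB
path2 = ["aabb", "aaBb", "aaBB"]

# No intermediate step is ever unfit, so selection never opposes divergence.
assert all(fitness(g) == 1.0 for g in path1 + path2)

# The F1 hybrid of AAbb x aaBB unites A and B in one genome for the first time.
print(fitness("AaBb"))  # prints 0.0: the incompatibility appears only in hybrids
```

The point of the sketch is that the incompatibility is never "seen" by selection within either lineage; it only surfaces when the two independently derived alleles meet in a hybrid.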
The signature of selection at speciation genes has been taken by some as tacit evidence for the pervasive role of ecological adaptation in speciation, including the evolution of hybrid sterility and lethality [12]. What is surprising, however, from the modern molecular analysis of speciation genes is how often their rapid sequence evolution and functional divergence seem to have little to do with adaptation to external ecological circumstances. Instead, speciation genes often (but not always [9–11]) seem to evolve as by-products of evolutionary arms races between selfish genetic elements—e.g., satellite DNAs [13],[14], meiotic drive elements [15], cytoplasmic male sterility factors [16]—and the host genes that regulate or suppress them [9–11],[17]. The notion that selfish genes are exotic curiosities is now giving way to a realization that selfish genes are common and diverse, each generation probing for transmission advantages at the expense of their bearers, fueling evolutionary arms races and, not infrequently, contributing to the genetic divergence that drives speciation. Indeed, the case has become so strong that examples of hybrid sterility and lethality genes that have evolved in response to ecological challenges (other than pathogens) appear to be the exception [9],[11],[17].

Perhaps the most clear-cut case in which a genetic incompatibility seems to have evolved as a by-product of ecological adaptation comes from populations of the yellow monkey flower, Mimulus guttatus, from Copperopolis (California, U.S.A.). In the last ∼150 years, the Copperopolis population has evolved tolerance to the tailings of local copper mines (Figure 1). These copper-tolerant M. guttatus plants also happen to be partially reproductively isolated from many off-mine M. guttatus plants, producing hybrids that suffer tissue necrosis and death.
In classic work, Macnair and Christie showed that copper tolerance is controlled by a single major factor [18] and hybrid lethality, as expected under the Dobzhansky-Muller model, by complementary factors [19]. Surprisingly, in crosses between tolerant and nontolerant plants, hybrid lethality perfectly cosegregates with tolerance [19],[20]. The simplest explanation is that the copper tolerance allele that spread to fixation in the Copperopolis population also happens to cause hybrid lethality as a pleiotropic by-product. The alternative explanation is that the copper tolerance and hybrid lethality loci happen to be genetically linked; when the copper tolerance allele spread to fixation in Copperopolis, hybrid lethality hitchhiked to high frequency along with it [20]. But with 2n = 28 chromosomes, the odds that copper tolerance and hybrid lethality alleles happen to be linked would seem vanishingly small [20].

Figure 1. Yellow monkey flowers (Mimulus guttatus) growing in the heavy-metal-contaminated soils of copper-mine tailings.

In this issue, Wright and colleagues [21] revisit this classic case of genetic incompatibility as a by-product of ecological adaptation. They make two discoveries, one genetic and the other evolutionary. By conducting extensive crossing experiments and leveraging the M. guttatus genome sequence (www.mimulusevolution.org), Wright et al. [21] map copper tolerance and hybrid necrosis to tightly linked but genetically separable loci, Tol1 and Nec1, respectively. Hybrid lethality is not a pleiotropic consequence of copper tolerance. Instead, the tolerant Tol1 allele spread to fixation in Copperopolis, and the tightly linked incompatible Nec1 allele spread with it by genetic hitchhiking. In a turn of bad luck, the loci happen to fall in a heterochromatic pericentric region, where genome assemblies are often problematic, putting identification of the Tol1 and Nec1 genes out of immediate reach. Wright et al.
[21] were, however, able to identify linked markers within ∼0.3 cM of Tol1 and place Nec1 within a 10-kb genomic interval that contains a Gypsy3 retrotransposon, raising two possibilities. First, the Gypsy3 element is unlikely to cause hybrid lethality directly; instead, as transposable elements are often epigenetically silenced in plants, it seems possible that the Nec1-associated Gypsy3 is silenced, with incidental consequences for the expression of a gene (or genes) in the vicinity [22]. Second, although the Nec1 interval is 10 kb in the reference genome of M. guttatus, it could be larger in the (not-yet-sequenced) Copperopolis population, perhaps harboring additional genes.

With Tol1 and Nec1 mapped near and to particular genomic scaffolds, respectively, Wright et al. were able to investigate the evolutionary history of the genomic region. Given the clear adaptive significance of copper tolerance in Copperopolis plants, we might expect to see the signatures of a strong selective sweep in the Tol1 region—a single Tol1 haplotype may have spread to fixation so quickly that all descendant Copperopolis plants bear the identical haplotype and thus show strongly reduced population genetic variability in the Tol1-Nec1 region relative to the rest of the genome [23],[24]. After the selective sweep is complete, variability in the region ought to recover gradually as new mutations arise and begin to fill out the mutation-drift equilibrium frequency spectrum expected for neutral variation in the Copperopolis population [25],[26]. Given that Tol1 reflects an adaptation to mine tailings established just ∼150 generations ago, there would have been little time for such a recovery. And yet, while Wright et al.
find evidence of moderately reduced genetic variability in the Tol1-Nec1 genomic region, the magnitude of the reduction is hardly dramatic relative to the genome average.

How, then, is it possible that the Tol1-Nec1 region swept to fixation in Copperopolis in fewer than ∼150 generations and yet left no strong footprint of a hitchhiking event? One possibility is that rather than a single, unique Tol1-Nec1 haplotype contributing to fixation, causing a "hard sweep," multiple Tol1-Nec1 haplotypes sampled from previously standing genetic variation contributed to fixation, causing a "soft sweep" [27]. A soft sweep would be plausible if Tol1 and Nec1 both segregated in the local off-mine ancestral population and if the two were, coincidentally, found on the same chromosome more often than expected by chance (i.e., in linkage disequilibrium). Then, after the copper mines were established, multiple plants with multiple Tol1 haplotypes (and, by association, Nec1) could have colonized the newly contaminated soils of the mine tailings. Tol1 segregates at ∼9% in surrounding populations, suggesting that standing genetic variation for copper tolerance may well have been present in the ancestral populations.

Two big questions remain for the Tol1-Nec1 story, and both would be readily advanced by identification of Tol1 and Nec1. The first question concerns the history of Tol1 haplotypes in Copperopolis and surrounding off-mine populations. As Nec1-mediated hybrid lethality is incomplete, the ∼9% Tol1 frequency in surrounding populations could reflect its export via gene flow from the Copperopolis population. Conversely, if there was a soft sweep from standing Tol1 variation in surrounding off-mine populations, then Tol1 and Nec1 may still be in linkage disequilibrium in those populations (assuming ∼150 years of recombination has not broken up the association).
Resolving these alternative possibilities is a matter of establishing the history of movement of Tol1 haplotypes into or out of the Copperopolis population. The soft sweep scenario, if correct, presents a population genetics puzzle: during the historical time that mutations accumulated among the multiple tolerant but incompatible Tol1-Nec1 haplotypes in the ancestral off-mine populations, why did recombination fail to degrade the association, giving rise to tolerant but compatible haplotypes?

The second question concerns the identity of Nec1 (or, if it really is a Gypsy3 element, the identity of the nearby gene whose expression is disrupted as a consequence). The answer bears on one of the new emerging generalizations about genetic incompatibilities in plants [9]. Recently, Bomblies and Weigel [28] synthesized a century's worth of observations on the commonly seen necrosis phenotype in plant hybrids and, based on their own genetic analyses in Arabidopsis [29], suggested that many of these cases may have a common underlying basis: incompatibilities between plant pathogen resistance genes can cause autoimmune responses that result in tissue necrosis and hybrid lethality. Hybrid necrosis, indeed, appears to involve pathogen resistance genes across multiple plant groups [9],[28]. It remains to be seen whether Nec1-mediated lethality provides yet another instance.
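A back-of-the-envelope calculation shows why tight linkage makes the hitchhiking scenario workable over this timescale. This is my own illustration, not the authors' analysis: with a recombination fraction r per generation between two loci, the expected fraction of haplotypes never broken by a crossover after g generations is (1 − r)^g. The specific values below (r on the order of 0.003, suggested by the ∼0.3 cM marker distance, and g ≈ 150 generations since the mines were established) are assumptions for the sketch.

```python
# Fraction of two-locus haplotypes that survive g generations
# without a crossover between the loci, given per-generation
# recombination fraction r.

def intact_fraction(r, generations):
    """Probability a haplotype is never broken by recombination."""
    return (1.0 - r) ** generations

# Assumed values: ~0.3 cM implies r ~ 0.003; ~150 generations
# since the copper mines were established.
r = 0.003
g = 150
print(f"{intact_fraction(r, g):.2f}")  # prints 0.64
```

Roughly two-thirds of such tightly linked haplotypes would still carry both alleles together after 150 generations, so a hitchhiking association at this distance decays only slowly; the puzzle the authors raise is why it also persisted over the presumably much longer history of the ancestral off-mine populations.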

Coral reefs on remote islands and atolls are less exposed to direct human stressors but are becoming increasingly vulnerable because of their development for geopolitical and military purposes. Here we document dredging and filling activities by countries in the South China Sea, where building new islands and channels on atolls is leading to considerable losses of, and perhaps irreversible damage to, unique coral reef ecosystems. Preventing similar damage across other reefs in the region necessitates the urgent development of cooperative management of disputed territories in the South China Sea. We suggest using the Antarctic Treaty as a positive precedent for such international cooperation.

Coral reefs constitute one of the most diverse, socioeconomically important, and threatened ecosystems in the world [1–3]. Coral reefs harbor thousands of species [4] and provide food and livelihoods for millions of people while safeguarding coastal populations from extreme weather disturbances [2,3]. Unfortunately, the world’s coral reefs are rapidly degrading [1–3], with ~19% of the total coral reef area effectively lost [3] and 60% to 75% under direct human pressures [3,5,6]. Climate change aside, this decline has been attributed to threats emerging from widespread human expansion in coastal areas, which has facilitated exploitation of local resources, assisted colonization by invasive species, and led to the loss and degradation of habitats directly and indirectly through fishing and runoff from agriculture and sewage systems [1–3,5–7]. In efforts to protect the world’s coral reefs, remote islands and atolls are often seen as reefs of "hope," as their isolation and uninhabitability provide de facto protection against direct human stressors and may help impacted reefs through replenishment [5,6].
Such isolated reefs may, however, still be vulnerable because of their geopolitical and military importance (e.g., allowing expansion of exclusive economic zones and providing strategic bases for military operations). Here we document patterns of reclamation (here defined as creating new land by filling submerged areas) of atolls in the South China Sea, which have resulted in considerable loss of coral reefs. We show that conditions are ripe for reclamation of more atolls, highlighting the need for international cooperation in the protection of these atolls before more unique and ecologically important biological assets are damaged, potentially irreversibly so.

Studies of past reclamation and reef dredging activities have shown that these operations are highly deleterious to coral reefs [8,9]. First, reef dredging affects large parts of the surrounding reef, not just the dredged areas themselves. For example, 440 ha of reef was completely destroyed by dredging on Johnston Island (United States) in the 1960s, but over 2,800 ha of nearby reefs were also affected [10]. Similarly, at Hay Point (Australia) in 2006 there was a loss of coral cover up to 6 km away from dredging operations [11]. Second, recovery from the direct and indirect effects of dredging is slow at best and nonexistent at worst. In 1939, 29% of the reefs in Kaneohe Bay (United States) were removed by dredging, and none of the patch reefs that were dredged had completely recovered 30 years later [12]. In Castle Harbour (Bermuda), reclamation to build an airfield in the early 1940s led to limited coral recolonization and large quantities of resuspended sediments even 32 years after reclamation [13]; several fish species are claimed extinct as a result of this dredging [14,15]. Such examples and others led Hatcher et al.
[8] to conclude that dredging and land clearing, as well as the associated sedimentation, are possibly the most permanent of anthropogenic impacts on coral reefs.

The impacts of dredging in the Spratly Islands are of particular concern because the geographical position of these atolls favors connectivity via stepping stones for reefs over the region [16–19] and because their high biodiversity works as insurance for many species. In an extensive review of the sparse and limited data available for the region, Hughes et al. [20] showed that reefs on offshore atolls in the South China Sea were overall in better condition than near-shore reefs. For instance, by 2004 they reported average coral cover of 64% for the Spratly Islands and 68% for the Paracel Islands. By comparison, coral reefs across the Indo-Pacific region in 2004 had average coral cover below 25% [21]. Reefs on isolated atolls can still be prone to extensive bleaching and mortality due to global climate change [22] and, in the particular case of atolls in the South China Sea, the use of explosives and cyanide [20]. However, the potential for recovery of isolated reefs from such stressors is remarkable. Hughes et al. [20] documented, for instance, how coral cover in several offshore reefs in the region declined from above 80% in the early 1990s to below 6% by 1998 to 2001 (due to a mixture of El Niño and damaging fishing methods that use cyanide and explosives) but then recovered to 30% on most reefs, and up to 78% on some, by 2004–2008. Another important attribute of atolls in the South China Sea is their great diversity of species. Over 6,500 marine species are recorded for these atolls [23], including some 571 reef coral species [24] (more than half of the world’s known species of reef-building corals).
The relatively better health and high diversity of coral reefs in atolls over the South China Sea highlight the uniqueness of such reefs and the important roles they may play for reefs throughout the entire region. Furthermore, these atolls are safe harbor for some of the last viable populations of highly threatened species (e.g., the Bumphead Parrotfish [Bolbometopon muricatum] and several species of sawfishes [Pristis, Anoxypristis]), highlighting how dredging in the South China Sea may threaten not only species with extinction but also the commitment by countries in the region to biodiversity conservation goals such as the Convention on Biological Diversity Aichi Targets and the United Nations Sustainable Development Goals.

Recently available remote sensing data (i.e., Landsat 8 Operational Land Imager and Thermal Infrared Sensors Terrain Corrected images) allow quantification of the sharp contrast between the gain of land and the loss of coral reefs resulting from reclamation in the Spratly Islands (Fig 1). For seven atolls recently reclaimed by China in the Spratly Islands (names provided in Fig 1D; see S1 Data for details), the area of reclamation was measured as the visible area in Landsat band 6, as prior to reclamation most of the atolls were submerged, with the exception of small areas occupied by a handful of buildings on piers (note that the amount of land area was near zero at the start of the reclamation; Fig 1C, S1 Data). The seven reclaimed atolls have effectively lost ~11.6 km2 (26.9%) of their reef area for a gain of ~10.7 km2 of land (i.e., a >75-fold increase in land area) from February 2014 to May 2015 (Fig 1C). The area of land gained was smaller than the area of reef lost because reefs were lost not only through land reclamation but also through the deepening of reef lagoons to allow boat access (Fig 1B).
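The figures just quoted can be cross-checked with simple arithmetic. The sketch below is illustrative only, using the numbers stated above (11.6 km² of reef lost, stated as 26.9% of reef area; 10.7 km² of land gained; a >75-fold land increase) to back out the implied totals.

```python
# Consistency check on the reclamation figures quoted in the text.

reef_lost_km2 = 11.6     # reef area lost, Feb 2014 - May 2015
reef_lost_frac = 0.269   # stated as 26.9% of the atolls' reef area
land_gained_km2 = 10.7   # new land created over the same period

# Implied total reef area of the seven atolls before reclamation.
total_reef_km2 = reef_lost_km2 / reef_lost_frac
print(round(total_reef_km2, 1))  # ~43.1 km2

# A >75-fold increase in land implies initial land area on the order of:
land_initial_km2 = land_gained_km2 / 75
print(round(land_initial_km2, 2))  # ~0.14 km2, i.e. near zero, as stated

# Reef loss exceeds land gain; the ~0.9 km2 difference reflects lagoons
# deepened for boat access rather than filled with dredged material.
print(round(reef_lost_km2 - land_gained_km2, 1))
```

The numbers hang together: a near-zero initial land area is consistent with the "handful of buildings on piers" noted above, and the gap between reef lost and land gained matches the lagoon-deepening explanation.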
Similar quantification of reclamation by other countries in the South China Sea is provided in Table 1.

Fig 1. Reclamation leads to gains of land in return for losses of coral reefs: a case example of China’s recent reclamation in the Spratly Islands.

Table 1. List of reclaimed atolls in the Spratly Islands and the Paracel Islands.

The impacts of reclamation on coral reefs are likely more severe than simple changes in area, as reclamation is being achieved by means of suction dredging (i.e., cutting and sucking materials from the seafloor and pumping them over land). With this method, reefs are ecologically degraded and denuded of their structural complexity. Dredging and pumping also disturb the seafloor and can cause runoff from reclaimed land, which generates large clouds of suspended sediment [11] that can lead to coral mortality by overwhelming the corals’ capacity to remove sediments and leaving corals susceptible to lesions and diseases [7,9,25]. The highly abrasive coralline sands in flowing water can scour away living tissue on a myriad of species and bury many organisms beyond their recovery limits [26]. Such sedimentation also prevents new coral larvae from settling in and around the dredged areas, which is one of the main reasons why dredged areas show no signs of recovery even decades after the initial dredging operations [9,12,13]. Furthermore, degradation of the wave-breaking reef crests, which make reclamation in these areas feasible, will result in a further reduction of coral reefs’ ability to (1) self-repair and protect against wave abrasion [27,28] (especially in a region characterized by typhoons) and (2) keep up with rising sea levels over the next several decades [29].
This suggests that the new islands would require periodic dredging and filling, that these reefs may face chronic distress and long-term ecological damage, and that reclamation may prove economically expensive and impractical.

The potential for land reclamation on other atolls in the Spratly Islands is high, which necessitates the urgent development of cooperative management of disputed territories in the South China Sea. First, the Spratly Islands are rich in atolls with similar characteristics to those already reclaimed (Fig 1D); second, there are calls for rapid development of disputed territories to gain access to resources and increase sovereignty and military strength [30]; and third, all countries with claims in the Spratly Islands have performed reclamation in this archipelago [20]. Cooperative international management offers a way to avert further losses; one such possibility is the generation of a multinational marine protected area [16,17]. Such a marine protected area could safeguard an area of high biodiversity and importance to genetic connectivity in the Pacific, in addition to promoting peace in the region (extended justification provided by McManus [16,17]). A positive precedent for the creation of this protected area is that of Antarctica, which was also subject to numerous overlapping claims and where a recently renewed treaty froze national claims, preventing large-scale ecological damage while providing environmental protection and areas for scientific study. Development of such a legal framework for the management of the Spratly Islands could prevent conflict, promote functional ecosystems, and potentially result in larger gains (through spillover, e.g., [31]) for all countries involved.

8.
It was recently proposed that long-term population studies be exempted from the expectation that authors publicly archive the primary data underlying published articles. Such studies are valuable to many areas of ecological and evolutionary biological research, and multiple risks to their viability were anticipated as a result of public data archiving (PDA), ultimately all stemming from independent reuse of archived data. However, empirical assessment was missing, making it difficult to determine whether such fears are realistic. I addressed this by surveying data packages from long-term population studies archived in the Dryad Digital Repository. I found no evidence that PDA results in reuse of data by independent parties, suggesting the purported costs of PDA for long-term population studies have been overstated.

Data are the foundation of the scientific method, yet individual scientists are evaluated via novel analyses of data, generating a potential conflict of interest between a research field and its individual participants that is manifested in the debate over access to the primary data underpinning published studies [1–5]. This is a chronic issue but has become more acute with the growing expectation that researchers publish the primary data underlying research reports (i.e., public data archiving [PDA]). Studies show that articles publishing their primary data are more reliable and accrue more citations [6,7], but a recent opinion piece by Mills et al. [2] highlighted the particular concerns felt by some principal investigators (PIs) of long-term population studies regarding PDA, arguing that unique aspects of such studies render them unsuitable for PDA. The "potential costs to science" identified by Mills et al. [2] as arising from PDA are as follows:
  • Publication of flawed research resulting from a "lack of understanding" by independent researchers conducting analyses of archived data
  • Time demands placed on the PIs of long-term population studies arising from the need to correct such errors via, e.g., published rebuttals
  • Reduced opportunities for researchers to obtain the skills needed for field-based data collection because equivalent long-term population studies will be rendered redundant
  • Reduced number of collaborations
  • Inefficiencies resulting from repeated assessment of a hypothesis using a single dataset
Each "potential cost" is ultimately predicated on the supposition that reuse of archived long-term population data is common, yet the extent to which this is true was not evaluated. To assess the prevalence of independent reuse of archived data—and thereby examine whether the negative consequences of PDA presented by Mills et al. [2] may be realised—I surveyed datasets from long-term population studies archived in the Dryad Digital Repository (hereafter, Dryad). Dryad is an online service that hosts data from a broad range of scientific disciplines, but its content is dominated by submissions associated with ecological and evolutionary biological research [8]. I examined all the Dryad packages associated with studies from four journals featuring ecological or evolutionary research: The American Naturalist, Evolution, Journal of Evolutionary Biology, and Proceedings of the Royal Society B: Biological Sciences (the latter referred to hereafter as Proceedings B). These four journals together represent 23.3% of Dryad's contributed packages (as of early February 2016). Mills et al. [2] refer to short- versus long-term studies but do not provide a definition of this dichotomy. However, the shortest study represented by their survey lasted for 5 years, so I used this as the minimum time span for inclusion in my survey. This cut-off seems reasonable, as it will generally exclude studies resulting from single projects, such that included datasets likely relate to studies resulting from a sustained commitment on the part of researchers—although one included package contains data gathered via "citizen science" [9], and two others contain data derived from archived human population records [10,11]. However, as these datasets cover extended time spans and were used to address ecological questions [12–14], they were retained in my survey sample. Following Mills et al.
[2], my focus was on population studies conducted in natural (or seminatural) settings, so captive populations were excluded. Because I was assessing the reuse of archived data, I excluded packages published by Dryad after 2013: authors can typically opt to impose a 1-year embargo, and articles based on archived data will themselves take some time to be written and published.

Of the 1,264 archived data packages linked to one of the four journals and published on the Dryad website before 2014, 72 were identified as meeting the selection criteria. This sample represents a diverse range of taxa (Fig 1) and is comparable to the 73 studies surveyed by Mills et al. [2], although my methodology permits individual populations to be represented more than once, since the survey was conducted at the level of published articles (S1 Table). Of these 72 data packages, five had long-term embargoes still active (three packages with 5-year embargoes [15–17]; two packages with 10-year embargoes [18,19]). For two of these [17,19], the time span of the study could not be estimated because this information is not provided in the associated articles [20,21]. For a third package [22], the archived data indicated 10 years were represented (dummy coding was used to disguise factor level identities, including for year), yet the text of the associated paper suggests data collection covered a considerably greater time span [23]. However, since the study period is not stated in the text, I followed the archived data [22] in assuming data collection spanned a 10-year period.
The distribution of study time spans is shown in Fig 2.

Fig 1. Taxonomic representation of the 72 data packages included in the survey. The number of packages for each taxon is given in parentheses (note: one data package included data describing both insects and plants [9], while other data packages represented multiple species within a single taxonomic category).

Fig 2. The study periods of the 70 data packages included in the survey for which this could be calculated.

For each year from 2000 to 2004, these four journals contributed no more than a single data package to Dryad between them. However, around the time that the Joint Data Archiving Policy (JDAP; [24]) was adopted by three of these journals, we see a surge in PDA by ecologists and evolutionary biologists (Fig 3), such that in 2015 these four journals were collectively represented by 709 data packages. Of course, Mills et al. [2] argue against mandatory archiving of primary data for long-term studies in particular. For this subset of articles published in these four journals, the same pattern is observed: prior to adoption of the JDAP, only two data packages associated with long-term studies had been archived in Dryad, but following the implementation of the JDAP as a condition of publication in The American Naturalist, Evolution, and Journal of Evolutionary Biology, there is a rapid increase in the number of data packages being archived, despite the continuing availability of alternative venues should authors wish to avoid the purported costs of PDA as Mills et al. [2] contend. As the editorial policy of Proceedings B has shifted towards an increasingly strong emphasis on PDA (it is now mandatory), there has similarly been an increase in the representation of articles from this journal in Dryad, both overall (Fig 3) and for long-term studies in particular (Fig 4).
These observations suggest that authors rarely chose to publicly archive their data prior to the adoption of PDA policies by journals and that uptake of PDA spread rapidly once it became a prerequisite for publication. In this respect, researchers using long-term population studies are no different to those in other scientific fields, despite the assertion by Mills et al. [2] that they are a special case owing to the complexity of their data. In reality, researchers in many other scientific disciplines also seek to identify relationships within complex systems. Within neuroscience, for example, near-identical objections to PDA were raised at the turn of the century [25], while archiving of genetic and protein sequences by molecular biologists has yielded huge advances but was similarly resisted until revised journal policies stimulated a change in culture [1,26].

Fig 3. Total number of data packages archived in the Dryad Digital Repository each year for four leading journals within ecology and evolutionary biology. Arrow indicates when the Joint Data Archiving Policy (JDAP) was adopted by Evolution, Journal of Evolutionary Biology, and The American Naturalist. Note that because data packages are assigned a publication date by Dryad prior to journal publication (even if an embargo is imposed), some data packages will have been published in the year preceding the journal publication of their associated article.

Fig 4. Publication dates of the 72 data packages from long-term study populations that were included in the survey.

A primary concern raised by opponents of PDA is that sharing their data will see them "scooped" by independent researchers [6,8,27–30]. To quantify this risk for researchers maintaining long-term population studies, I used the Web of Science (wok.mimas.ac.uk) to search for citations of each data package (as of November 2015).
For the 67 Dryad packages that were publicly accessible, none was cited by any article other than that from which it was derived. However, archived data could conceivably have been reused without the data package being cited, so I examined all journal articles that cited the study report associated with each data package (median citation count: 9; range: 0–58). Although derived metrics from the main articles were occasionally included in quantitative reviews [31,32] or formal meta-analyses [33], I again found no examples of the archived data being reused by independent researchers. As a third approach, I emailed the corresponding author(s) listed for each article to ask if they were themselves aware of any examples. The replies I received (n = 35) confirmed that there were no known cases of long-term population data being independently reused in published articles. The apparent concern of some senior researchers that PDA will see them "collect data for 30 years just to be scooped" [30] thus lacks empirical support. It should also be noted that providing primary data upon request predates PDA as a condition of acceptance for most major scientific journals [8]. PDA merely serves to ensure that authors meet this established commitment, a step made necessary by the failure rate that is otherwise observed, even after the recent revolution in communications technology [34–36]. As my survey shows, in practice the risk of being scooped is a monster under the bed: empirical assessment fails to justify the level of concern expressed. While long-term population studies are unquestionably a highly valuable resource for ecologists [2,37–39] and will likely continue to face funding challenges [37–39], there is no empirical support for the contention of Mills et al. [2] that PDA threatens their viability, although this situation may deserve reassessment in the future if the adoption of PDA increases within ecology and evolutionary biology.
Nonetheless, in the absence of assessments over longer time frames (an inevitable result of the historical reluctance to adopt PDA), my survey results raise doubts over the validity of arguments favouring extended embargoes for archived data [29,40], and particularly the suggestion that multidecadal embargoes should be facilitated for long-term studies [2,41].

Authors frequently assert that unique aspects of their long-term study render it especially well suited to addressing particular issues. Such claims contradict the suggestion that studies will become redundant if PDA becomes the norm [2] while simultaneously highlighting the necessity of making primary data available for meaningful evaluation of results. For research articles relying on data collected over several decades, independent replication is clearly impractical, such that reproducibility (the ability for a third party to replicate the results exactly [42]) is rendered all the more crucial. Besides permitting independent validation of the original results, PDA allows assessment of the hypotheses using alternative analytical methods (large datasets facilitate multiple analytical routes to test a single biological hypothesis, which likely contributes to poor reproducibility [43]) and reassessment if flaws in the original methodology later emerge [44]. Although I was not attempting to use archived data to replicate published results, and thus did not assess the contents of each package in detail, at least six packages [10,45–49] failed to provide the primary data underlying their associated articles, including a quantitative genetic study [50] for which only pedigree information was archived [47].
This limits exploration of alternative statistical approaches to the focal biological hypothesis and impedes future applications of the data that may be unforeseeable by the original investigators (a classic example being Bumpus' [51] dataset describing house sparrow survival [52]), but it seems to be a reality of PDA within ecology and evolution at present [53].

The "solutions" proffered by Mills et al. [2] are, in reality, alternatives to PDA that would serve to maintain the status quo with respect to data accessibility for published studies (i.e., subject to consent from the PI). This is a situation that is widely recognised to be failing with respect to the availability of studies' primary data [34–36,54]. Indeed, for 19% (13 of 67 nonembargoed studies) of the articles represented in my survey, the correspondence email addresses were no longer active, highlighting how rapidly access to long-term primary data can be passively lost. It is unsurprising, then, that 95% of scientists in evolution and ecology are reportedly in favour of PDA [1]. Yet, having highlighted the value and irreplaceability of data describing long-term population studies, Mills et al. [2] reject PDA in favour of allowing PIs to maintain postpublication control of primary data, going so far as to discuss the possibility of data being copyrighted. Such an attitude risks inviting public ire, since asserting private ownership ignores the public funding that likely enabled data collection, and is at odds with a Royal Society report urging scientists to "shift away from a research culture where data is viewed as a private preserve" [55]. I contend that primary data would better be considered as an intrinsic component of a published article, alongside the report appearing in the pages of a journal that presents the data's interpretation. In this way, an article would move closer to being a self-contained product of research that is fully accessible and assessable.
For issues that can only be addressed using data covering an extended time span [2,37–39], excusing long-term studies from the expectation of publishing primary data would potentially render the PIs as unaccountable gatekeepers of scientific consensus. PDA encourages an alternative to this and facilitates a change in the treatment of published studies, from the system of preservation (in which a study's contribution is fixed) that has been the historical convention, towards a conservation approach (in which support for hypotheses can be reassessed and updated) [56]. Given the fundamentally dynamic nature of science, harnessing the storage potential enabled by the Information Age to ensure a study's contribution can be further developed or refined in the future seems logical and would benefit both the individual authors (through enhanced citations and reputation) and the wider scientific community.

The comparison Mills et al. [2] draw between PIs and pharmaceutical companies in terms of how their data are treated is inappropriate: whereas the latter bear the financial cost of developing a drug, a field study's costs are typically covered by the public purse, such that the personal risks of a failed project are largely limited to opportunity costs. It is inconsistent to highlight funding challenges [2,37] while simultaneously acting to inhibit maximum value for money being derived from funded studies. Several of the studies represented in the survey by Mills et al. [2] comfortably exceed a 50-year time span, highlighting the possibility that current PIs are inheritors rather than initiators of long-term studies. In such a situation, arguments favouring the rights of the PI to maintain control of postpublication access to primary data are weakened still further, given that the data may be the result of someone else's efforts.
Indeed, given the undoubted value of long-term studies for ecological and evolutionary research [2,37,39], many of Mills et al.'s [2] survey respondents will presumably hope to see these studies continue after their own retirement. Rather than owners of datasets, then, perhaps PIs of long-term studies might better be considered as custodians, such that—to adapt the slogan of a Swiss watchmaker—"you never really own a long-term population study; you merely look after it for the next generation."

9.
Examples of ecological specialization abound in nature, but the evolutionary and genetic causes of tradeoffs across environments are typically unknown. Natural selection itself may favor traits that improve fitness in one environment but reduce fitness elsewhere. Furthermore, an absence of selection on unused traits renders them susceptible to mutational erosion by genetic drift. Experimental evolution of microbial populations allows these potentially concurrent dynamics to be evaluated directly, rather than by historical inference. The 50,000-generation (and counting) Lenski Long-Term Evolution Experiment (LTEE), in which replicate E. coli populations have been passaged in a simple environment with only glucose for carbon and energy, has inspired multiple studies of their potential specialization. Earlier in this experiment, most changes were the side effects of selection, both broadening growth potential in some conditions and narrowing it in others, particularly in assays of diet breadth and thermotolerance. The fact that replicate populations experienced similar losses suggested they were becoming specialists because of tradeoffs imposed by selection. However, a new study in this issue of PLOS Biology by Nicholas Leiby and Christopher Marx revisits these lines with powerful new growth assays and finds a surprising number of functional gains as well as losses, the latter of which were enriched in populations that had evolved higher mutation rates. Thus, these populations are steadily becoming glucose specialists under the relentless pressure of mutation accumulation, which has taken 25 years to detect. More surprisingly, the unpredictability of functional changes suggests that we still have much to learn about how the best-studied bacterium adapts to grow on the best-studied sugar.

The wonder of biological diversity belies a puzzling subtext. Species are defined as much by their limits as their capabilities.
Very few species in our common vernacular tolerate life in a wide range of environments, and those that do—the Norway rat, say—are not generally appealing. More often, we celebrate specialization to a particular condition: for example, orchid epiphytes growing tenuously in the cloud forest, only a subtle climate shift from extinction. Even grade-school natural history teaches us that species are often unfit when living beyond their natural range.

So it comes as a surprise that the causes of this rampant ecological specialization are poorly understood. "Use it or lose it," but why? One common explanation is that natural selection tends to favor traits that simultaneously enhance fitness in one environment but compromise fitness elsewhere. This selective process is known as "antagonistic pleiotropy." Another explanation is that a selective shadow falls upon unused traits, rendering them susceptible to mutational erosion by random genetic drift. This neutral process is known as "mutation accumulation" (Figure 1). These processes inevitably co-occur, and can be enhanced by the hybrid dynamic of genetic hitchhiking, in which neutral mutations affecting unused functions become linked to different mutations under positive selection. In most cases, the functional decay of a species can only be studied retrospectively, and distinguishing the roles of antagonistic pleiotropy and mutation accumulation is hampered by weak historical inference. Did selection, or an absence of selection, produce the blind cavefish [1]? There is little controversy that the sum of these dynamics can produce specialists, but their timing and relative importance are an open question.

Figure 1. Hypothetical dynamics of fitness in foreign environments by pleiotropy or mutation accumulation during long-term adaptation. Prolonged adaptation to one environment leads to decelerating fitness gains in the selective environment (solid black line), as beneficial mutations become limiting.
Consequences of this adaptation for fitness in other environments may take different forms. No net change may occur if beneficial mutations generate no or inconsistent side effects (neutrality). However, the same mutations responsible for adaptation may also increase fitness in other environments (synergistic pleiotropy, dotted line), may decrease fitness in foreign environments at an equivalent rate if antagonistic effects correlate with selected effects (antagonistic pleiotropy, dotted line), or may decrease fitness at an increasing rate if subsequent mutations generate greater tradeoffs (antagonistic pleiotropy, dashed and dotted line). The uncertainty of the form of pleiotropic effects reflects a general lack of understanding of how mutations interact to affect fitness, particularly over the long term. Mutation accumulation (MA) in traits hidden from selection is expected to reduce fitness randomly but linearly on average, more slowly during evolution at a low mutation rate (MA, low U) or more rapidly at a high mutation rate (MA, high U). Evidence of all of these processes is apparent in this latest study of the evolution of diet breadth in the LTEE [20].

The study of "evolution in action" using model experimental populations of rapidly reproducing organisms allows researchers to quantify both adaptation and any functional declines simultaneously. This approach is especially powerful when samples of evolving populations can be stored in an inanimate state and studied at a later time under various conditions. Perhaps the best example of this approach is Richard Lenski's Long-Term Evolution Experiment (LTEE), in which 12 populations of E. coli have been grown under simple conditions for more than 25 years and 50,000 generations [2],[3].

When as a graduate student I wondered aloud whether the LTEE lines had become specialists, a colleague remarked: "Of course! You've selected for streamlined E.
coli that have scuttled unused functions." But with only a small amount of glucose as the sole carbon source available to the ancestor (the innovation by one population of using citrate for growth more than 30,000 generations in the future notwithstanding [4]), all anabolic pathways to construct new cells remain under strong selection to preserve their function. Moreover, because some catabolic reactions use the same intermediates as anabolic pathways (a form of pleiotropy) [5], growth on alternative carbon sources may nonetheless be preserved. Thus, we wondered whether the physiology of E. coli might actually prove to be robust during long-term evolution on glucose alone.

Over the first 2,000 generations, the LTEE lines gained fitness more often than they lost it across a range of different environments [6]. In addition, a high-throughput screen of cellular respiration (Biolog) for the best-studied clone from these lines showed 171 relative gains and only 32 losses [7]. Even these losses in substrate respiration did not translate to reduced fitness versus the ancestor; rather, the evolved clone was simply relatively worse in the foreign resources than in glucose [7]. Evidently, each of the five beneficial mutations found in this early clone was broadly beneficial and imparted few tradeoffs [8]. Generalists rather than specialists were the rule.

Between 2,000 and 20,000 generations, fitness losses in foreign conditions became more obvious but not always consistent. Some lines became less fit than the ancestor in a dilute complex medium (LB) [9], all lines grew worse at high (>40°C) and low (<20°C) temperatures [10], and all lines became sensitive to the resource concentration in their environment, even for glucose [9]. Did subsequent beneficial mutations cause these tradeoffs (antagonistic pleiotropy), or did other, neutral or slightly harmful mutations accumulate by drift (Figure 1)? We must consider the population genetic dynamics of these LTEE populations.
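The mutation-accumulation expectation sketched in Figure 1 can be illustrated with a toy calculation (a minimal sketch, not the LTEE data; the fitness cost per fixed mutation and the trajectory details are illustrative assumptions): if mutations that are neutral in the selective environment fix at the genomic mutation rate U, and each fixed mutation carries a small cost in a foreign environment, then average foreign fitness declines roughly linearly, with the slope set by U.

```python
import random

def ma_trajectory(U, s_foreign=0.001, generations=50_000, seed=1):
    """Fitness sketch for a foreign (unselected) environment when
    mutations that are neutral at home fix at rate U per generation
    (the neutral-theory expectation) and each costs s_foreign abroad."""
    rng = random.Random(seed)
    fitness = 1.0
    for _ in range(generations):
        if rng.random() < U:            # a home-neutral mutation fixes...
            fitness *= 1.0 - s_foreign  # ...and erodes foreign fitness
    return fitness

low_U = ma_trajectory(U=1e-3)   # ancestral-like genomic mutation rate
high_U = ma_trajectory(U=1e-1)  # a strong mutator (illustrative value)
print(f"foreign fitness after 50,000 generations: "
      f"low U -> {low_U:.3f}, high U -> {high_U:.4f}")
```

Run with these toy numbers, the low-U line loses only a few percent of foreign fitness over 50,000 generations while the mutator line collapses, mirroring the "MA, low U" versus "MA, high U" curves of Figure 1.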
The hallmark of neutral theory [11] is that mutations with no effect in the selective environment should become fixed in the population at the rate of mutation. For the ancestor of this experiment, the mutation rate is ~10^−3 per genome per generation [12],[13], so only a handful of neutral mutations would have fixed by the time tradeoffs became evident, and would not likely explain the early specialization.

However, an important extension of neutral theory is that slightly harmful mutations—those whose effects are roughly the inverse of the population size or below, 1/N—can also be fixed by drift [14]. Millions of slightly deleterious mutations were produced in these populations, which cycled between 5×10^6 and 5×10^8 cells each day. Might these mutations account for tradeoffs over the first 10,000–20,000 generations? In small populations, the effect of these mutations can be substantial, which explains why bottlenecked populations may experience fitness declines or even the genome erosion frequently seen in bacterial endosymbionts [15]. But in the large LTEE populations, most deleterious mutations are weeded out by selection and only those with the slightest effects may accumulate over very long time scales. Thus, because these early losses tended to occur when adaptation in the selective environment was most rapid, and because the randomness and rarity of mutation accumulation should not produce parallel changes over these time scales, early specialization is best explained by antagonistic pleiotropy [9],[10].

Later in the LTEE, elevated mutation rates began to evolve in certain lines, resulting in a fundamental change in the population genetic environment [16],[17] that should increase the rate of functional decay in unused, essentially neutral functions. These mutator populations tended to perform worse in multiple environments, and in theory should continue to specialize more rapidly by accelerated mutation accumulation.
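The two population-genetic claims above (a new neutral allele fixes with probability ~1/N, while a mutation whose cost greatly exceeds 1/N is almost always purged) can be checked with a bare-bones Wright-Fisher simulation. The population size and selection coefficient here are illustrative toys, many orders of magnitude smaller than the LTEE's, chosen only so the simulation runs quickly.

```python
import random

def fixation_fraction(N, s, reps=2000, seed=42):
    """Fraction of new single-copy mutants that drift to fixation in a
    Wright-Fisher population of size N; s < 0 means deleterious."""
    rng = random.Random(seed)
    fixed = 0
    for _ in range(reps):
        count = 1
        while 0 < count < N:
            # selection-weighted chance that an offspring draws the mutant
            p = count * (1 + s) / (count * (1 + s) + (N - count))
            count = sum(rng.random() < p for _ in range(N))
        fixed += count == N
    return fixed / reps

N = 100
neutral = fixation_fraction(N, s=0.0)        # theory: ~1/N = 0.01
deleterious = fixation_fraction(N, s=-0.05)  # N*|s| = 5: drift rarely wins
print(f"fixation fraction, neutral: {neutral:.4f}; "
      f"deleterious: {deleterious:.4f}")
```

The neutral fraction lands near the 1/N prediction, while the deleterious allele essentially never fixes; shrinking N until N·|s| drops below 1 would let drift fix it too, which is the nearly-neutral regime invoked for the endosymbiont genome erosion mentioned above.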
As a first test, we used Biolog plates to assay respiration on 95 different carbon sources over the first 20,000 generations [18]. Although mutators tended to exhibit a reduced breadth of function in this assay, the difference was not statistically significant [18]. Rather, a surprising number of losses of function were shared among replicate lines, and we took this parallelism as further support of antagonistic pleiotropy driven by selection for common sets of adaptive mutations.

Here the LTEE offers its greatest advantage: more time, both for evolution and for innovative research. Over subsequent generations, mutator lines should continue to accumulate greater mutational load by drift and hence become more specialized than lines retaining the low ancestral rate. Genomic sequences of the evolved lines have now confirmed this increased mutational load [3],[19] in the six of 12 lines that are now mutators [16]. In this issue, Leiby and Marx [20] have readdressed these questions by retracing old steps, applying the prior Biolog assays to lines spanning 50,000 generations of evolution, and by pioneering new high-throughput assays of fitness in many resources. Somewhat surprisingly, these methods disagree, challenging the reliability of Biolog data as a fitness proxy. As a proprietary measure of cellular respiration, Biolog can demonstrate major functional shifts but is less reliable than growth rate as a fitness parameter.

More importantly, Leiby and Marx provide clear evidence that niche breadth in the LTEE was shaped by both mutation accumulation and pleiotropy. Growth rates actually increased on several resources, and hence the pleiotropic effects of adaptation to glucose were synergistic, broadening functionality particularly over the first 20,000 generations, as well as antagonistic, producing fewer tradeoffs than previously thought [20].
Pleiotropic effects were also somewhat unpredictable: a sophisticated flux-balance analysis [21] of foreign substrates did not reveal more gains for resources similar to glucose or losses for dissimilar resources. Some early losses linked to selection (maltose, galactose, serine) [6] became complete, but subtle gains of function for dicarboxylic acid metabolism, perhaps related to growth on metabolic byproducts, also became amplified. The most striking pattern was that mutator populations became specialists, diminished for many functions owing to their greater mutational burden, and this only became evident after 50,000 generations in a single resource. These convergent functional losses were not caused by selection, as is often argued, but rather by an absence of selection in the face of mutational pressure. Mutational decay by genetic drift takes a long time, and it seems it will take much longer still for the non-mutator lines.

Although Leiby and Marx [20] correctly emphasize the importance of truly long-term selection combined with deficient DNA repair to reveal effects of mutation accumulation, decay has been witnessed in other systems undergoing regular population bottlenecks over shorter time scales [22],[23]. Antagonistic pleiotropy can also reveal its effects much more rapidly than was seen in the LTEE, especially when selection discriminates among discrete fitness features in a heterogeneous environment, such as in the colonization of a new landscape [24],[25]. What this study uniquely illustrates is the unpredictability of the pleiotropic effects of adaptation to a simple environment, which in turn shows how chance draws from a distribution of contending beneficial mutations may produce divergent outcomes, ranging from generalists to specialists. A sample of the first mutants competing to prevail in the LTEE system showed variable niche breadth [26], so perhaps we should not be surprised that the footprints of these large-effect mutations endure.
Further study of the precise mechanisms by which different mutations produce more fit offspring will teach us more about the origins of the diversity that beguiles us. We can also gain a broader perspective on the longstanding tension between chance and necessity [27]—a motivator of the LTEE—by focusing more on what is unnecessary, such as how organisms grow in foreign environments. Often insight comes from studying at the margins of a problem, and here, the limits to the growth of these bacteria have allowed us to focus more on how exactly they have accomplished their most essential tasks.

10.
11.
Many organisms harbor microbial associates that have profound impacts on host traits. The phenotypic effects of symbionts on their hosts may include changes in development, reproduction, longevity, and defense against natural enemies. Determining the consequences of associating with a microbial symbiont requires experimental comparison of hosts with and without symbionts. Determining the mechanism by which symbionts alter these phenotypes can then involve genomic, genetic, and evolutionary approaches; however, many host-associated symbionts are not amenable to genetic approaches that require cultivation of the microbe outside the host. In the current issue of PLOS Biology, Chrostek and Teixeira highlight an elegant approach to studying the functional mechanisms of symbiont-conferred traits. They used directed experimental evolution to select for strains of Wolbachia wMelPop (a bacterial symbiont of fruit flies) that differed in copy number of a region of the genome suspected to underlie virulence. Copy number evolved rapidly when under selection, and wMelPop strains with more copies of the region shortened the lives of their Drosophila hosts more than symbionts with fewer copies. Interestingly, the wMelPop strains with more copies also increased host resistance to viruses compared to symbionts with fewer copies. Their study highlights the power of exploiting alternative approaches when elucidating the functional impacts of symbiotic associations.

Symbioses, long-term and physically close interactions between two or more species, are central to the ecology and evolution of many organisms. Though "symbiosis" is more often used to define interactions that are presumed to be mutually beneficial to a host and its microbial partner, a broader definition including both parasitic and mutualistic interactions recognizes that the fitness effects of many symbioses are complex and often context dependent.
Whether an association is beneficial can depend on ecological conditions, and mutation and other evolutionary processes can result in symbiont strains that differ in terms of costs and benefits to hosts (Fig. 1).

Fig 1. The symbiosis spectrum. The costs and benefits of symbiosis for hosts are not bimodal but span a continuum. The benefit-to-cost ratio is mediated both by environmental conditions and by the strain of symbiont. For example, the bacterium Hamiltonella defensa increases aphid resistance to parasitoid wasps. When Hamiltonella loses an associated bacteriophage, protection is lost. Also in aphids, Buchnera aphidicola is a bacterial symbiont that provisions its hosts with critical nutritional resources. However, alterations of the heat-shock promoter in Buchnera lessen the fitness benefit of symbiosis for the hosts under elevated temperatures. Amplification of a region of the Wolbachia genome known as Octomom causes the bacteria to shorten the lifespan of their Drosophila fly hosts.

Elucidating the effects of host-associated microbes includes, when possible, experiments designed to assay host phenotypes when hosts do and do not have a particular symbiont of interest (Fig. 2). In systems in which hosts acquire symbionts from the environment, hosts can be reared in sterile conditions to prevent acquisition [1]. If symbionts are passed internally from mother to offspring, antibiotic treatments can sometimes be utilized to obtain lineages of hosts without symbionts [2]. The impacts of symbiont presence on survival, development, reproduction, and defense can then be quantified, with the caveat that these impacts may be quite different under alternative environmental conditions.
While such experiments are sometimes more tractable in systems with simple microbial consortia, the same experimental processes can be utilized in systems with more complex microbial communities [3,4].

Fig 2. Approaches to functionally characterize symbiont effects. The first step in functionally characterizing the phenotypic impacts of a symbiont on its host is to measure phenotypes of hosts with and without symbionts. Any effects need to be considered in the light of how they are modified by environmental conditions. Understanding the mechanisms underlying symbiont alteration of host phenotype can involve, and often combines, genomic, genetic, and evolutionary approaches. Solid arrows indicate the path leading to the results highlighted in Chrostek and Teixeira's investigation of Wolbachia virulence in this issue of PLoS Biology.

Once a fitness effect of symbiosis is ascertained, determining the mechanistic basis of this effect can be challenging. A genomics approach sometimes provides informative insight into microbial function. Sequencing of many insect-associated symbionts, for example, has confirmed the presence of genes necessary for amino acid and vitamin synthesis [5–8]. These genomic revelations can, in some cases, be linked to phenotypic effects of symbiosis for the hosts. For example, aphids reared in the absence of their obligate symbiotic bacteria, Buchnera aphidicola, can survive when provisioned with supplemental amino acids but cannot survive without supplementation, suggesting that Buchnera's provisioning of amino acids is critical for host survival [9,10]. The Buchnera genome contains many of the genes necessary for amino acid synthesis [5].

Linking genotype to phenotype, however, can be complicated. Experiments are necessary to functionally test the insights garnered from genome sequencing.
For example, the fact that a symbiont has genes necessary for synthesis of a particular nutrient does not mean that the nutrient is being provisioned to its host. Furthermore, in many systems we do not know what genetic mechanisms are most likely to influence a symbiont-conferred phenotype. For example, if hosts associated with a given microbe have lower fitness than those without the microbe, what mechanism mediates this phenotype? Is the microbe producing a toxin? Is it using too many host resources? In these cases, a single genome provides even less insight.

Comparative genomics can be another approach. This requires collection of hosts with alternative symbiont strains and then testing these strains in a common host background to demonstrate that they have different phenotypic effects. Symbiont genomes can then be sequenced and compared to identify differences. This approach was utilized to compare genomes of strains of the aphid bacterial symbiont Regiella insecticola that confer different levels of resistance to parasitoid wasps [11]; the protective and nonprotective Regiella genomes differed in many respects. Comparing the genomes of Wolbachia strains with differential impacts on fly host fitness [12,13] revealed fewer differences, though none involved a gene with a function known to impact host fitness. Comparative genomics rarely uncovers a holy grail, as the genomes of symbiont strains with alternative phenotypic effects rarely differ at a single locus of known function.

Another approach, which is at the heart of studies of microbial pathogens, is to use genetic tools to manipulate symbionts at candidate loci (or randomly through mutagenesis) and compare the phenotypic effects of genetically manipulated and unmanipulated symbionts. Indeed, this approach has provided insights into genes underlying traits of both pathogenic [14] and beneficial [15,16] microbes. There is one challenge.
Many host-associated symbionts are not cultivable outside of their hosts, which precludes utilization of most traditional genetic techniques used to modify microbial genomes.

An alternative approach to studying symbiont function leverages evolution. Occasionally, lineages that once conferred some phenotypic effect, when tested later, no longer do. If symbiont samples were saved along the way, researchers can then determine what in the genome changed. For example, pea aphids (Acyrthosiphon pisum) harboring the bacteria Hamiltonella defensa are more resistant to parasitoid wasps than those without the bacteria [17,18]. Toxin-encoding genes identified in the genome of a Hamiltonella-associated bacteriophage were hypothesized to be central to this defense [18,19]. However, confirmation of the bacteriophage's role required comparing the insects' resistance to wasps when they harbored the same Hamiltonella with and without the phage. No Hamiltonella isolates were found in nature without the phage, but bottleneck passaging of the insects and symbionts generation after generation in the laboratory led to the loss of the phage in multiple host lineages. Experimental assays confirmed that in the absence of the phage, there was no protection [20]. Similarly, laboratory passaging of aphids and symbionts serendipitously led to the spread of a mutation in the genome of Buchnera aphidicola, the primary, amino acid-synthesizing symbiont of pea aphids. The mutation, a single-nucleotide deletion in the promoter of ibpA, a gene encoding a heat-shock protein, lowers aphid fitness under elevated temperature conditions [21]. The mutation is found at low levels in natural aphid populations, suggesting that laboratory conditions facilitate maintenance of the genotype.

In the above cases, evolution was a fortunate coincidence. In this issue of PLoS Biology, Chrostek and Teixeira (2014) illustrate another alternative: directed experimental evolution.
Previous work demonstrated that a strain of the symbiotic bacterium Wolbachia, wMelPop, is virulent to its Drosophila melanogaster hosts, considerably shortening lifespan while overproliferating inside the flies [22]. To investigate the mechanism of virulence, researchers compared the genomic content of an avirulent Wolbachia strain to that of the virulent wMelPop [12,13]. These comparisons revealed that the wMelPop genome contains a region of eight genes that is amplified multiple times; avirulent strains carry only a single copy. This eight-gene region was nicknamed “Octomom.” To functionally test whether Octomom mediates Wolbachia virulence, Chrostek and Teixeira selected, over successive generations, females with either high or low Octomom copy numbers to start the next generation. They found that copy number could evolve rapidly and was correlated with virulence: flies harboring wMelPop with more copies of Octomom had shorter lifespans. This cost was offset in the presence of natural enemies, as flies harboring wMelPop with more copies of Octomom had higher resistance to viral pathogens. Thus, selection provided a functional link between genotype and phenotype in a symbiont recalcitrant to traditional microbial genetics approaches.

In many respects, this is similar to the research on aphids and their symbionts, where protective phenotypes were lost through passaging of aphids and symbionts generation after generation as part of standard laboratory maintenance. Chrostek and Teixeira simply used the tools of experimental evolution to select for altered symbionts in a controlled fashion. Comparing the studies also highlights two potential approaches: select for a phenotype and determine the genotypic change, or select for a genotype of interest and determine the phenotypic effect.

Why do we need to know the genetic mechanisms underlying symbiont-conferred traits?
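The selection scheme used by Chrostek and Teixeira, propagating each line from the female whose symbionts carry the most (or fewest) copies, can be caricatured as simple truncation selection on copy number. The sketch below is purely illustrative: the brood size, mutation step, and starting copy number are hypothetical and not taken from the study.

```python
import random

def evolve_line(select="high", generations=10, brood_size=20,
                start_copies=3, seed=1):
    """Caricature of truncation selection on symbiont gene-copy number.

    Each generation, copy number mutates by -1/0/+1 in every offspring,
    and the line is propagated from the offspring with the highest
    (or lowest) copy number. All parameters are hypothetical.
    """
    rng = random.Random(seed)
    copies = start_copies
    for _ in range(generations):
        brood = [max(1, copies + rng.choice((-1, 0, 1)))
                 for _ in range(brood_size)]
        copies = max(brood) if select == "high" else min(brood)
    return copies

high_line = evolve_line(select="high")
low_line = evolve_line(select="low")
```

Even this toy model diverges within a handful of generations, which mirrors the observation that Octomom copy number can evolve rapidly under directed selection.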
In terms of evolutionary dynamics, the maintenance of a symbiont’s effect in a population depends on the likelihood that the underlying genotype persists in the face of mutation, drift, and selection. Symbiosis research often considers how ecological conditions influence symbiont-conferred traits but less often considers the instability of those influences due to evolutionary change. From an applied perspective, symbiont alteration of insect phenotypes is a potential mechanism to reduce vectoring of human and agricultural pathogens, either by directly reducing insect fitness or by reducing the capacity of vectors to serve as pathogen reservoirs [23–28]. Short-term field trials, for example, have demonstrated spread and persistence of Wolbachia in mosquito populations [29,30]. Because Wolbachia reduce persistence of viruses, including human pathogens, in insects [26,31–33], this is a promising pesticide-free and drug-free control strategy for insect-vectored diseases. Can we assume that Wolbachia and other symbionts will always confer the same phenotypes to their hosts? If the conferred phenotype is based on a region of the genome where mutation is likely (e.g., the homopolymeric tract within the heat-shock promoter of aphid Buchnera, the Octomom region in Drosophila wMelPop), then we have clear reason to suspect that the genotypic and phenotypic makeup of the symbiont population could change over time. We need to investigate how populations of bacterial symbionts evolve in host populations under natural ecological conditions, carefully screening for changes in both phenotype and genotype over the course of such experimental observations. We then need to incorporate evolutionary change when modeling symbiont maintenance and when considering applied uses of symbionts.

12.
13.
Multicellular eukaryotes can perform functions that exceed the possibilities of an individual cell. These functions emerge through interactions between differentiated cells that are precisely arranged in space. Bacteria also form multicellular collectives that consist of differentiated but genetically identical cells. How does the functionality of these collectives depend on the spatial arrangement of the differentiated bacteria? In a previous issue of PLOS Biology, van Gestel and colleagues reported an elegant example of how the spatial arrangement of differentiated cells gives rise to collective behavior in Bacillus subtilis colonies, further demonstrating the similarity of bacterial collectives to higher multicellular organisms.

Introductory textbooks tend to depict bacteria as rather primitive and simple life forms: the billions of cells in a population are all supposedly performing the exact same processes, independent of each other. According to this perspective, the properties of the population are nothing more than the sum of the properties of the individual cells. A brief look at the recent literature shows that life at the micro scale is much more complex and far more interesting. Even though cells in a population share the same genetic material and are exposed to similar environmental signals, they are individuals: they can differ greatly from each other in their properties and behaviors [1,2].

One source of such phenotypic variation is that individual cells experience different microenvironments and regulate their genes in response. Intriguingly, however, phenotypic differences can also arise in the absence of environmental variation [3]. The stochastic nature of biochemical reactions makes variation between individuals unavoidable: reaction rates in cells fluctuate because of the typically small numbers of the molecules involved, leading to slight differences in molecular composition between individual cells [4].
While cells cannot prevent fluctuations from occurring, the effect of these extracellular and intracellular perturbations on a cell’s phenotype can be controlled by changing the biochemical properties of molecules or the architecture of gene regulatory networks [4,5]. The degree of phenotypic variation could thus evolve in response to natural selection. This raises the question of whether the high degree of phenotypic variation observed in some traits could offer benefits to the bacteria [5].

One potential benefit of phenotypic variation is bet hedging. Bet hedging refers to a situation in which a fraction of the cells express alternative programs, which typically reduce growth in the current conditions but at the same time allow for increased growth or survival when the environment abruptly changes [6–8]. Another potential benefit can arise through the division of labor: phenotypic variation can lead to the formation of interacting subpopulations that specialize in complementary tasks [9]. As a result, the population as a whole can perform existing functions more efficiently or attain new functionality [10]. Division of labor enables groups of bacteria to engage in two tasks that are incompatible with each other but that are both required to attain a certain biological function.

One of the most famous examples of division of labor in bacteria is the specialization of multicellular cyanobacteria into photosynthesizing and nitrogen-fixing subpopulations [11]. Here, the driving force behind the division of labor is the biochemical incompatibility between photosynthesis and nitrogen fixation, as the oxygen produced during photosynthesis permanently damages the enzymes involved in nitrogen fixation [12]. Other examples include the division of labor between two subpopulations of Salmonella Typhimurium (Tm) during infections [9] and the formation of multicellular fruiting bodies in Myxococcus xanthus [13].
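The logic of bet hedging can be made concrete with a toy two-environment model; the fitness numbers and stress probability below are invented for illustration and do not come from any of the cited studies. The key quantity is the long-run (geometric-mean) growth rate, which rewards lineages that sacrifice some growth in normal conditions to avoid being wiped out when the environment abruptly changes.

```python
import math

# Hypothetical per-generation fitness of two phenotypes in two environments.
FITNESS = {
    "fast":    {"normal": 2.0, "stress": 0.0},  # grows fast, dies under stress
    "dormant": {"normal": 1.1, "stress": 1.1},  # slow but stress-tolerant
}

def log_geometric_growth(fraction_dormant, p_stress=0.1):
    """Expected log growth rate of a lineage that expresses the dormant
    program in a fixed fraction of its cells each generation."""
    def lineage_fitness(env):
        return (fraction_dormant * FITNESS["dormant"][env]
                + (1 - fraction_dormant) * FITNESS["fast"][env])
    return ((1 - p_stress) * math.log(lineage_fitness("normal"))
            + p_stress * math.log(lineage_fitness("stress")))

barely_hedged = log_geometric_growth(0.01)
hedged = log_geometric_growth(0.2)
all_dormant = log_geometric_growth(1.0)
```

With these invented numbers, a lineage devoting roughly 20% of its cells to the dormant program outgrows both near-pure strategies in the long run, which is the essence of bet hedging as a population-level insurance policy.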
Division of labor is not restricted to interactions between only two subpopulations; for example, the soil-dwelling bacterium Bacillus subtilis can differentiate into at least five different cell types [14]. Multiple types can be present simultaneously in Bacillus biofilms and colonies, each contributing different essential tasks [14,15].

An important question is whether a successful division of labor requires the different subpopulations to coordinate their behavior and spatial arrangement. For some systems, it turns out that spatial coordination is not required. For example, the division of labor in clonal groups of Salmonella Tm does not require that the two cell types be spatially arranged in a particular way [9]. In other systems, spatial coordination between the different cell types seems to be beneficial. For example, differentiation into nitrogen-fixing and photosynthetic cells in multicellular cyanobacteria is spatially regulated in a way that facilitates sharing of nitrogen and carbon [16]. In general, when cell differentiation is combined with coordination of behavior between cells, this can allow for the development of complex, group-level behaviors that cannot easily be deduced from the behavior of individual cells [17–20]. In these cases, a population can no longer be treated as an assembly of independent individuals but must be seen as a union that together shows collective behavior.

The study by van Gestel et al. [21] in a previous issue of PLOS Biology offers an exciting perspective on how collective behavior can arise from processes operating at the level of single cells. Van Gestel and colleagues [21] analyzed how groups of B. subtilis cells migrate across solid surfaces in a process known as sliding motility. The authors found that migration requires both individuality—the expression of different phenotypes in clonal populations—and spatial coordination between cells.
Migration depends critically on the presence of two cell types: surfactin-producing cells, which excrete a surfactant that reduces surface tension, and matrix-producing cells, which excrete extracellular polysaccharides and proteins that form a connective extracellular matrix (Fig 1B) [14]. These two cell types are not randomly distributed across the bacterial group but are spatially organized (Fig 1C). The matrix-producing cells form bundles of interconnected and highly aligned cells, which the authors refer to as “van Gogh” bundles. The surfactin-producing cells are not present in the van Gogh bundles but are essential for their formation [21].

Fig 1. Collective behavior through the spatial organization of differentiated cells. (A) Initially, cells form a homogeneous population. (B) Differentiation: cells start to differentiate into surfactin- (orange) and matrix- (blue) producing cells. The two cell types perform two complementary and essential tasks, resulting in a division of labor. (C) Spatial organization: the matrix-producing cells form van Gogh bundles, consisting of highly aligned and interconnected cells. Surfactin-producing cells are excluded from the bundles and have no particular spatial arrangement. (D) Collective behavior: growth of cells in the van Gogh bundles leads to buckling of these bundles, resulting in colony expansion. The buckling and resulting expansion depend critically on the presence of the two cell types and on their spatial arrangement.

The ability to migrate is a collective behavior that can be linked to the biophysical properties of the multicellular van Gogh bundles. The growth of cells in these bundles causes them to buckle, which in turn drives colony migration (Fig 1D) [21]. This is a clear example of an emergent (group-level) phenotype: the buckling of the van Gogh bundles and the resulting colony motility cannot easily be deduced from the properties of individual cells.
Rather, to understand colony migration we have to understand the interactions between the two cell types as well as their spatial organization. Building on a rich body of work on the regulation of gene expression and cellular differentiation in Bacillus [14], van Gestel et al. [21] are able to show how these molecular mechanisms lead to the formation of specialized cell types that, through coordinated spatial arrangement, give the group the ability to move (Fig 1). The study thus uniquely bridges the gap between molecular mechanisms and collective behavior in bacterial multicellularity.

This study raises a number of intriguing questions. A first question pertains to the molecular mechanisms underlying the spatial coordination of the two cell types. Can the spatial organization be explained by known mechanisms of gene regulation in this organism, or does the formation of these patterns depend on hitherto uncharacterized gene regulation based on spatial gradients or cell–cell interactions? A second question concerns the selective forces that led to the evolution of collective migration in this organism. The authors raise the interesting hypothesis that van Gogh bundles evolved to allow for migration. Although this explanation is very plausible, it also raises the question of how selection acting on a property at the level of the group can lead to adaptation at the individual cell level. Possible mechanisms for such selective processes have been described within the framework of multilevel selection theory. However, there are still many questions regarding how, and to what extent, multilevel selection operates in the natural world [22–24].
The system described by van Gestel and colleagues [21] offers exciting opportunities to address these questions using a highly studied and experimentally amenable model organism.

Bacterial collectives (e.g., colonies or biofilms) have been likened to multicellular organisms partly because of the presence of cell differentiation and the importance of an extracellular matrix [25,26]. Higher multicellular organisms share these properties; however, they are more than simple lumps of interconnected, differentiated cells. Rather, the functioning of multicellular organisms critically depends on the precise spatial organization of these cells [27]. Even though spatial organization has been suggested before in B. subtilis biofilms [28], there was a gap in our understanding of how the spatial organization of individual cells can lead to group-level function. The van Gogh bundles in the article by van Gestel et al. [21] provide direct evidence of how differentiated cells can spatially organize themselves to give rise to group-level behavior. This shows once more that bacteria are not primitive “bags of chemicals” but rather are more like us “multicellulars” than we might have expected.

14.
From bacteria to multicellular animals, most organisms exhibit declines in survivorship or reproductive performance with increasing age (“senescence”) [1],[2]. Evidence for senescence in clonal plants, however, is scant [3],[4]. During asexual growth, we expect that somatic mutations, which negatively impact sexual fitness, should accumulate and contribute to senescence, especially among long-lived clonal plants [5],[6]. We tested whether older clones of Populus tremuloides (trembling aspen) from natural stands in British Columbia exhibited significantly reduced reproductive performance. Coupling molecular-based estimates of clone age with male fertility data, we observed a significant decline in the average number of viable pollen grains per catkin per ramet with increasing clone age in trembling aspen. We found that mutations reduced relative male fertility in clonal aspen populations by about 5.8×10−5 to 1.6×10−3 per year, leading to an 8% reduction in the number of viable pollen grains, on average, among the clones studied. The probability that an aspen lineage ultimately goes extinct rises as its male sexual fitness declines, suggesting that even long-lived clonal organisms are vulnerable to senescence.
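The per-year fertility losses reported above compound multiplicatively over a clone's lifetime. A back-of-the-envelope sketch: the rates are from the abstract, but the clone ages below are hypothetical, chosen only to illustrate the compounding.

```python
def relative_fertility(loss_per_year, age_years):
    """Expected male fertility relative to a mutation-free genet,
    assuming a constant multiplicative loss each year."""
    return (1.0 - loss_per_year) ** age_years

# Reported range of per-year fertility loss, applied to invented clone ages:
old_slow = relative_fertility(5.8e-5, 1000)  # low rate, 1,000-year-old clone
young_fast = relative_fertility(1.6e-3, 50)  # high rate, 50-year-old clone
```

Both hypothetical cases land within a few percent of the roughly 8% average reduction reported for the sampled clones, showing how even tiny per-year losses matter at the timescales over which aspen clones persist.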

15.
Humans help each other. This fundamental feature of Homo sapiens has been one of the most powerful forces sculpting the advent of modern civilizations. But what determines whether humans choose to help one another? Across 3 replicating studies, we demonstrate that sleep loss represents one previously unrecognized factor dictating whether humans choose to help each other, observed at 3 different scales (within individuals, across individuals, and across societies). First, at an individual level, 1 night of sleep loss triggers the withdrawal of help from one individual to another. Moreover, fMRI findings revealed that the withdrawal of human helping is associated with deactivation of key nodes within the social cognition brain network that facilitates prosociality. Second, at a group level, ecological night-to-night reductions in sleep across several nights predict corresponding next-day reductions in the choice to help others during day-to-day interactions. Third, at a large-scale national level, we demonstrate that 1 h of lost sleep opportunity, inflicted by the transition to Daylight Saving Time, reduces real-world altruistic helping through the act of donation giving, established through the analysis of over 3 million charitable donations. Therefore, inadequate sleep represents a significant influential force determining whether humans choose to help one another, observable across micro- and macroscopic levels of civilized interaction. The implications of this effect may be non-trivial when considering the essentiality of human helping in the maintenance of cooperative, civil society, combined with the reported decline in sufficient sleep in many first-world nations.

Helping behavior between humans has been one of the most influential forces sculpting modern civilizations, but what factors influence this propensity to help? This study demonstrates that a lack of sleep dictates whether humans choose to help each other at three different scales: within individuals, across individuals, and across societies.

“Service to others is the rent you pay for your room here on earth.” ― Muhammad Ali
Humans help each other. Helping is a prominent feature of Homo sapiens [1] and represents a fundamental force sculpting the advent and preservation of modern civilizations [2,3].

The ubiquity of helping is evident across the full spectrum of societal strata. From global government-to-government aid packages (e.g., the international aid following the 2004 Indian Ocean tsunami [4]), to country-wide pledge drives (e.g., the 2010 Haiti disaster) [5], to individuals altruistically gifting money or donating their own blood to strangers, the expression of helping is abundant and pervasive [6]. So much so that this fundamental act has scaled into a lucent and sizable “helping economy” [7], with charitable giving in the United States amounting to $450 billion in 2019, a value representing 5.5% of the gross domestic product. In the United Kingdom, 10 billion pounds were donated to charity in 2017 and 2018. Indeed, more than 50% of individuals across the US, Europe, and Asia report having donated to charity or helped a stranger within the past month (World Giving Index). Human helping is therefore globally abundant, common across diverse societies, sizable in scope, substantive in financial magnitude, consequential in ramification, and frequent in occurrence.

The motivated drive for humans to help each other has been linked to a range of underlying factors, from evolutionary forces (e.g., kin selection and reciprocal altruism that bias helping toward close others [2]), to cultural norms and expectations (e.g., individualistic versus collectivistic cultures [8,9]), to socioeconomic factors (e.g., helping is less common in larger cities relative to rural areas [10,11]), as well as personality traits (e.g., individual empathy) [12,13].

Ultimately, however, the decisional act to help others involves the human brain. Prosocial helping of varied kinds consistently engages a set of brain regions known as the social cognition network.
Comprised of the medial prefrontal cortex (mPFC), mid and superior temporal sulcus, temporal-parietal junction (TPJ), and the precuneus [14,15], this network is activated when considering the mental states, needs, and perspectives of others [16–19], and the active choice to help them [20–23]. In contrast, lesions within key regions of this network result in “acquired sociopathy” [24], associated with a loss of both empathy and the withdrawal of compassionate helping [25–27]. Yet the possibility that sleep loss represents another significant factor determining whether or not humans help each other, linked to underlying impairments within the social cognition brain network, remains unknown. Several lines of evidence motivate this prediction. First, insufficient sleep impairs emotional processing, including deficits in emotion recognition and expression, while conversely increasing basic emotional reactivity, further linked to antisocial behavior [28,29] (such as increased interpersonal conflict [30] and reduced trust in others [31,32]). Second, sleep loss reliably decreases activity in, and disrupts functional connectivity between, numerous regions within the social cognition brain network [33], including the mPFC [34], TPJ, and precuneus [35]. Building on this overarching hypothesis, here, we test the prediction that a lack of sleep impairs human helping at a neural, individual, group, and global societal level.
More specifically, we tested whether: (i) within individuals, a night of experimental sleep loss decreases the fundamental desire to help others, the underlying neural mechanism of which is linked to impaired activity within the social cognition brain network when considering other individuals (Study 1); (ii) in a micro-longitudinal study, night-to-night fluctuations in sleep result in a corresponding next-day deficit in the desire to act altruistically and help others (Study 2); and (iii) at a societal level, the loss of 1 h of sleep opportunity, using the manipulation of daylight saving time (DST), impairs the real-world behavioral act of altruistic human helping at a large-scale, national level (Study 3).

16.
The hippocampus has unique access to neuronal activity across all of the neocortex. Yet an unanswered question is how the transfer of information between these structures is gated. One hypothesis involves temporal locking of activity in the neocortex with that in the hippocampus. New data from the Matthew E. Diamond laboratory show that the rhythmic neuronal activity that accompanies vibrissa-based sensation in rats transiently locks to ongoing hippocampal θ-rhythmic activity during the sensory-gathering epoch of a discrimination task. This result complements past studies on the locking of sniffing and the θ-rhythm, as well as the relation of sniffing and whisking. An overarching possibility is that the preBötzinger inspiration oscillator, which paces whisking, can selectively lock with the θ-rhythm to traffic sensorimotor information between the rat’s neocortex and hippocampus.

The hippocampus lies along the margins of the cortical mantle and has unique access to neuronal activity across all of the neocortex. From a functional perspective, the hippocampus forms the apex of neuronal processing in mammals and is a key element in the short-term working memory, where neuronal signals persist for tens of seconds, that is independent of the frontal cortex (reviewed in [1,2]). Sensory information from multiple modalities is highly transformed as it passes from primary and higher-order sensory areas to the hippocampus. Several anatomically defined regions that lie within the temporal lobe take part in this transformation, all of which involve circuits with extensive recurrent feedback connections (reviewed in [3]) (Fig 1). This circuit motif is reminiscent of the pattern of connectivity within models of associative neuronal networks, whose dynamics lead to the clustering of neuronal inputs to form a reduced set of abstract representations [4] (reviewed in [5]).
The first way station in the temporal lobe contains the postrhinal and perirhinal cortices, followed by the medial and lateral entorhinal cortices. Of note, olfactory input—which, unlike the other senses, has no spatial component to its representation—projects directly to the lateral entorhinal cortex [6]. The third structure is the hippocampus, which contains multiple substructures (Fig 1).

Fig 1. Schematic view of the circuitry of the temporal lobe and its connections to other brain areas of relevance. Figure abstracted from published results [7–15]. Composite illustration by Julia Kuhl.

The specific nature of signal transformation and neuronal computations within the hippocampus is largely an open issue that defines the agenda of a great many laboratories. Equally vexing is the nature of signal transformation as the output leaves the hippocampus and propagates back to regions in the neocortex (Fig 1)—including the medial prefrontal cortex, a site of sensory integration and decision-making—in order to influence perception and motor action. The current experimental data suggest that only some signals within the sensory stream propagate into and out of the hippocampus. What regulates communication with the hippocampus or, more generally, with structures within the temporal lobe? The results from studies in rats and mice suggest that the most parsimonious hypothesis, at least for rodents, involves the rhythmic nature of neuronal activity at the so-called θ-rhythm [16], a 5–10 Hz oscillation (reviewed in [17]). The origin of the rhythm is not readily localized to a single locus [10], but it certainly involves input from the medial septum [17] (a member of the forebrain cholinergic system) as well as from the supramammillary nucleus [10,18] (a member of the hypothalamus). The medial septum projects broadly to targets in the hippocampus and entorhinal cortex (Fig 1) [10].
Many motor actions, such as the orofacial actions of sniffing, whisking, and licking, occur within the frequency range of the θ-rhythm [19,20]. Thus, sensory input that is modulated by rhythmic self-motion can, in principle, phase-lock with hippocampal activity at the θ-rhythm to ensure the coherent trafficking of information between the relevant neocortical regions and temporal lobe structures [21–23].

We now shift to the nature of orofacial sensory inputs, specifically whisking and sniffing, which are believed to dominate the world view of rodents [19]. Recent work identified a premotor nucleus in the ventral medulla, named the vibrissa region of the intermediate reticular zone, whose oscillatory output is necessary and sufficient to drive rhythmic whisking [24]. While whisking can occur independently of breathing, sniffing and whisking are synchronized in the curious and aroused animal [24,25], as the preBötzinger complex in the medulla [26]—the oscillator for inspiration—paces whisking at nominally 5–10 Hz through collateral projections [27]. Thus, for the purposes of reviewing the evidence for the locking of orofacial sensory inputs to the hippocampal θ-rhythm, we confine our analysis to aroused animals that function with effectively a single sniff/whisk oscillator [28].

What is the evidence for the locking of somatosensory signaling by the vibrissae to the hippocampal θ-rhythm? The first suggestion of phase locking between whisking and the θ-rhythm was based on a small sample size [29,30], which allowed for the possibility of spurious correlations. Phase locking was subsequently reexamined, using a relatively large dataset of 2 s whisking epochs across many animals, as animals whisked in air [31]. The authors concluded that while whisking and the θ-rhythm share the same spectral band, their phases drift incoherently.
Yet the possibility remained that phase locking could occur during special intervals, such as when a rat learns to discriminate an object with its vibrissae or performs a memory-based task. This set the stage for a further reexamination of this issue across different epochs of a rewarded task. Work from Diamond's laboratory published in the current issue of PLOS Biology addresses just this point in a well-crafted experiment involving rats trained to perform a discrimination task.

Grion, Akrami, Zuo, Stella, and Diamond [32] trained rats to discriminate between two different textures with their vibrissae. The animals were rewarded if they turned to a water port on the side that was paired with a particular texture. Concurrent with this task, the investigators also recorded the local field potential in the hippocampus (from which they extracted the θ-rhythm), the position of the vibrissae (from which they extracted the evolution of phase in the whisk cycle), and the spiking of units in the vibrissa primary sensory cortex. Their first new finding is a substantial increase in the amplitude of the hippocampal field potential at the θ-rhythm frequency—approximately 10 Hz for the data of Fig 2A—during the two approximately 0.5 s epochs when the animal approaches the textures and whisks against them. There is significant phase locking between whisking and the hippocampal θ-rhythm during both of these epochs (Fig 2B), as compared to a null hypothesis of whisking in air outside the discrimination zone. Unfortunately, the coherence between whisking and the hippocampal θ-rhythm could not be ascertained during the decision, i.e., turn and reward, epochs.
Nonetheless, these data show that the coherence between whisking and the hippocampal θ-rhythm is closely aligned to epochs of active information gathering.

Fig 2. Summary of findings on the θ-rhythm in a rat during a texture discrimination task, derived from reference [32]. (A) Spectrogram showing the change in spectral power of the local field potential in the hippocampal area CA1 before, during, and after a whisking-based discrimination task. (B) Summary index of the increase in coherence between the band-limited hippocampal θ-rhythm and whisking signals during approach of the rat to the stimulus and subsequent touch. The index reports √(⟨sin(ɸH − ɸW)⟩² + ⟨cos(ɸH − ɸW)⟩²), where ɸH and ɸW are the instantaneous phases of the hippocampal and whisking signals, respectively, and averaging is over all trials and animals. (C) Summary indices of the increase in coherence between the band-limited hippocampal θ-rhythm and the spiking signal in the vibrissa primary sensory cortex (“barrel cortex”). The magnitude of the index for each neuron is plotted versus phase in the θ-rhythm. The arrows show the concentration of units around the mean phase—black arrows for the vector average across only neurons with significant phase locking (solid circles) and gray arrows for the vector average across all neurons (open and closed circles). The concurrent positions of the vibrissae are indicated. The vector average is statistically significant only for the approach (p < 0.0001) and touch (p = 0.04) epochs.

The second finding by Grion, Akrami, Zuo, Stella, and Diamond [32] addresses the relationship between spiking activity in the vibrissa primary sensory cortex and the hippocampal θ-rhythm. The authors find that spiking is essentially independent of the θ-rhythm outside of the task (foraging in Fig 2C), similar to the result for whisking and the θ-rhythm (Fig 2B).
They observe strong coherence between spiking and the θ-rhythm during the 0.5 s epoch when the animal approaches the textures (approach in Fig 2C), yet reduced (but still significant) coherence during the touch epoch (touch in Fig 2C). The latter result is somewhat surprising, given past work from a number of laboratories observing that spiking in the primary sensory cortex and whisking are weakly yet significantly phase-locked during exploratory whisking [33–37]. Perhaps overtraining leads to only a modest need for the transfer of sensory information to the hippocampus. Nonetheless, these data establish that phase locking of hippocampal and sensory cortical activity is essentially confined to the epoch of sensory gathering.

Given the recent finding of a one-to-one locking of whisking and sniffing [24], we expect to find direct evidence for the phase locking of sniffing and the θ-rhythm. Early work indeed reported such phase locking [38] but, as in the case of whisking [29], this may have been a consequence of too small a sample and, thus, inadequate statistical power. However, Macrides, Eichenbaum, and Forbes [39] reexamined the relationship between sniffing and the hippocampal θ-rhythm before, during, and after animals sampled an odorant in a forced-choice task. They found evidence that the two rhythms phase-lock within approximately one second of the sampling epoch. We interpret this locking to be similar to that seen in the study by Diamond and colleagues (Fig 2B) [32].
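The coherence index of Fig 2B, the length of the mean resultant vector of the phase differences, is straightforward to compute from two instantaneous-phase series. Below is a minimal sketch on synthetic phases (not data from the study): the index approaches 1 for a constant phase lag and stays near 0 for incoherent phases.

```python
import numpy as np

def phase_locking_index(phi_h, phi_w):
    """Mean resultant length of the phase differences phi_h - phi_w.
    Returns ~1 for perfect locking and ~0 for incoherent phases."""
    d = phi_h - phi_w
    return np.hypot(np.sin(d).mean(), np.cos(d).mean())

rng = np.random.default_rng(0)
phi_w = rng.uniform(0.0, 2.0 * np.pi, size=2000)   # synthetic whisk phases

# Locked: hippocampal phase trails whisking by a constant pi/3.
locked = phase_locking_index(phi_w + np.pi / 3, phi_w)

# Incoherent: hippocampal phase is independent of whisking.
incoherent = phase_locking_index(rng.uniform(0.0, 2.0 * np.pi, 2000), phi_w)
```

Note that a constant lag of any size, even π radians, yields an index of 1: this measure of phase locking does not require the two signals to be in phase, only that their phase difference be stable.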
All told, the combined data for sniffing and whisking by the aroused rodent, as accumulated across multiple laboratories, suggest that two oscillatory circuits—the supramammillary nucleus and medial septum complex that drives the hippocampal θ-rhythm and the preBötzinger complex that drives inspiration and paces the whisking oscillator during sniffing (Fig 1)—can phase-lock during epochs of gathering sensory information and likely sustain working memory.

What anatomical pathway can lead to phase locking of these two oscillators? The electrophysiological study of Tsanov, Chah, Reilly, and O’Mara [9] supports a pathway from the medial septum, which is driven by the supramammillary nucleus, to dorsal pontine nuclei in the brainstem. The pontine nucleus projects to respiratory nuclei and, ultimately, the preBötzinger oscillator (Fig 1). This unidirectional pathway can, in principle, entrain breathing and whisking. Phase locking is not expected to occur during periods of basal breathing, when the breathing rate and θ-rhythm occur at highly incommensurate frequencies. However, it remains unclear why phase locking occurs only during a selected epoch of a discrimination task, whereas breathing and the θ-rhythm occupy the same frequency band during the epochs of approach, as well as touch-based target selection (Fig 2A). While a reafferent pathway provides the rat with information on self-motion of the vibrissae (Fig 1), it is currently unknown whether that information provides feedback for phase locking.

A seeming requirement for effective communication between neocortical and hippocampal processing is that phase locking must be achieved at all possible phases of the θ-rhythm. Can multiple phase differences between sensory signals and the hippocampal θ-rhythm be accommodated? Two studies report that the θ-rhythm undergoes a systematic phase-shift along the dorsal–ventral axis of the hippocampus [40,41], although the full extent of this shift is only π radians [41].
In addition, past work shows that vibrissa input during whisking is represented among all phases of the sniff/whisk cycle, at levels from primary sensory neurons [42,43] through thalamus [44,45] and neocortex [33–37], with a bias toward retraction from the protracted position. A similar spread in phase occurs for olfactory input, as observed at the levels of the olfactory bulb [46] and cortex [47]. Thus, in principle, the hippocampus can receive, transform, and output sensory signals that arise over all possible phases in the sniff/whisk cycle. In this regard, two signals that are exactly out-of-phase by π radians can phase-lock as readily as signals that are in-phase.

What are the constraints for phase locking to occur within the observed texture identification epochs? For a linear system, the time to lock between an external input and hippocampal theta depends on the observed spread in the spectrum of the θ-rhythm. This is estimated as Δf ~3 Hz (half-width at half-maximum amplitude), implying a locking time on the order of 1/Δf ~0.3 s. This is consistent with the approximate one second of enhanced θ-rhythm activity observed in the study by Diamond and colleagues (Fig 2A) [32] and in prior work [39,48] during a forced-choice task with rodents.

Does the θ-rhythm also play a role in the gating of output from the hippocampus to areas of the neocortex? Siapas, Lubenov, and Wilson [48] provided evidence that hippocampal θ-rhythm phase-locks to electrical activity in the medial prefrontal cortex, a site of sensory integration as well as decision-making. Subsequent work [49–51] showed that the hippocampus drives the prefrontal cortex, consistent with the known unidirectional connectivity between Cornu Ammonis area 1 (CA1) of the hippocampus and the prefrontal cortex [11] (Fig 1). Further, phase locking of hippocampal and prefrontal cortical activity is largely confined to the epoch of decision-making, as opposed to the epoch of sensory gathering.
Thus, over the course of approximately one second, sensory information flows into and then out of the hippocampus, gated by phase coherence between rhythmic neocortical and hippocampal neuronal activity.

It is of interest that the medial prefrontal cortex receives input signals from sensory areas in the neocortex [52] as well as a transformed version of these input signals via the hippocampus (Fig 1). Yet it remains to be determined if this constitutes a viable hub for the comparison of the original and transformed signals. In particular, projections to the medial prefrontal cortex arise from the ventral hippocampus [2], while studies on the phase locking of hippocampal θ-rhythm to prefrontal neocortical activity were conducted in dorsal hippocampus, where the θ-rhythm is stronger than at the ventral end [53]. Therefore, similar recordings need to be performed in the ventral hippocampus. An intriguing possibility is that the continuous phase-shift of the θ-rhythm along the dorsal–ventral axis of the hippocampus [40,41] provides a means to encode the arrival of novel inputs from multiple sensory modalities relative to a common clock.

A final issue concerns the locking between sensory signals and hippocampal neuronal activity in species that do not exhibit a continuous θ-rhythm, with particular reference to bats [54–56] and primates [57–60]. One possibility is that only the up and down swings of neuronal activity about a mean are important, as opposed to the rhythm per se. In fact, for animals in which orofacial input plays a relatively minor role compared to rodents, such a scheme of clocked yet arrhythmic input may be a necessity. In this case, the window of processing is set by a stochastic interval between transitions, as opposed to the periodicity of the θ-rhythm.
This implies that up/down swings of neuronal activity may drive hippocampal–neocortical communications in all species, with communication mediated via phase-locked oscillators in rodents and via synchronous fluctuations in bats and primates. The validity of this scheme and its potential consequences for neuronal computation remain an open issue and a focus of ongoing research.

17.
Microorganisms have been cooperating with each other for billions of years: by sharing resources, communicating with each other, and joining together to form biofilms and other large structures. These cooperative behaviors benefit the colony as a whole; however, they may be costly to the individuals performing them. This raises the question of how such cooperation can arise from natural selection. Mathematical modeling is one important avenue for exploring this question. Evolutionary experiments are another, providing us with an opportunity to see evolutionary dynamics in action and allowing us to test predictions arising from mathematical models. A new study in this issue of PLOS Biology investigates the evolution of a cooperative resource-sharing behavior in yeast. Examining the competition between cooperating and “cheating” strains of yeast, the authors find that, depending on the initial mix of strains, this yeast society either evolves toward a stable coexistence or collapses for lack of cooperation. Using a simple mathematical model, they show how these dynamics arise from eco-evolutionary feedback, where changes in the frequencies of strains are coupled with changes in population size. This study and others illustrate the combined power of modeling and experiment to elucidate the origins of cooperation and other fundamental questions in evolutionary biology.

How much cooperation does it take to maintain a society? Many biological populations, from microbes to insects to humans, depend on the cooperation of their members in order to access resources, raise offspring, and avoid danger. Yet in any cooperative activity, there is the risk of “cheaters,” who benefit from the generosity of others while making no contribution of their own. Consider, for example, the layabout in a communal household who refuses to cook or clean dishes.
If this cheating behavior spreads through the population, the society as a whole may collapse.

Evolutionary biologists since Darwin have been fascinated by how populations can overcome this dilemma. Studying this question can be challenging. While the products of evolution are evident in the natural world, the process that produced them is mostly hidden from view. As a consequence, direct observation of the evolution of cooperation in action is often limited.

Much of our current understanding of this conundrum arises from mathematical modeling. Ever since the birth of population genetics about a century ago, it has been recognized that the theory of evolution can be set upon exact mathematical foundations. This approach has flourished, especially in the last few decades. The theory of choice to study social phenomena is evolutionary game theory [1]–[5], in which behaviors that affect others are represented as strategies. Simple mathematical models describe the dynamics of these strategies under mutation and selection, depending on the population structure [6]–[12]. Applied to the problem of cooperation, these models show that if a cooperating individual receives some of the benefit of his or her own labors—as in Snowdrift games or some nonlinear public goods games—then evolutionary dynamics may lead to an equilibrium in which cooperators and cheaters coexist [1],[13]. On the other hand, if benefits accrue only to others—as in Prisoners' Dilemma games—then cooperation is expected to disappear unless some mechanism is present to support it [14].

Recently, experiments with microbes have afforded us an unprecedented opportunity to observe evolution in action [15]–[20]. Bacteria, yeast, and other single-celled organisms divide rapidly enough that evolutionary change—the arrival and fixation of beneficial mutations—can be observed in the laboratory.
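The Snowdrift coexistence prediction mentioned above can be seen in a few lines of replicator dynamics. The sketch below uses illustrative payoff values, not figures from any of the cited studies: a benefit b for the task being done, and a cost c that is split when both players cooperate.

```python
# Replicator dynamics for a two-strategy Snowdrift game: a minimal sketch of
# the coexistence prediction. Payoff values b and c are illustrative.
b, c = 2.0, 1.0

def payoffs(x):
    """Expected payoffs to a cooperator and a defector when a fraction x cooperates."""
    pi_c = x * (b - c / 2) + (1 - x) * (b - c)  # partner cooperates: share cost; else pay it all
    pi_d = x * b                                # free-ride on a cooperating partner
    return pi_c, pi_d

def run(x0, dt=0.01, steps=200_000):
    """Euler-integrate the replicator equation dx/dt = x(1-x)(pi_c - pi_d)."""
    x = x0
    for _ in range(steps):
        pi_c, pi_d = payoffs(x)
        x += dt * x * (1 - x) * (pi_c - pi_d)
    return x

# The interior equilibrium sits at x* = (b - c) / (b - c/2) = 2/3 for these
# payoffs: cooperators and defectors coexist regardless of the starting mix.
print(run(0.1), run(0.9))
```

Starting from either a cooperator-poor or cooperator-rich population, the dynamics converge to the same interior mix, which is the qualitative behavior later observed in the yeast experiments.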
Moreover, the experimenter is able to control the population size, environmental conditions, and other variables, and can therefore test hypotheses regarding how the course of evolution depends on these variables. Experimenters can also preserve specimens of the population from all phases of its evolution as a “living record” of genotypic and phenotypic change. In short, experiments with microbes are a powerful tool for testing evolutionary hypotheses.

Microorganism experiments hold particular promise for shedding light on how cooperative behaviors emerge from evolution [21]–[26]. Microbial species cooperate in a variety of ways: They form biofilms, produce iron-scavenging agents, produce chemicals to resist antibiotics, and form fruiting bodies when local resources are depleted. By mixing wild-type strains that display a particular cooperative behavior with “cheater” mutants that do not, researchers can test hypotheses about what conditions favor wild-type “cooperators” over cheaters.

In one such experiment, Gore et al. [26] studied a social dilemma in the yeast Saccharomyces cerevisiae. The preferred nutrient sources for this yeast are the simple sugars glucose and fructose; however, it can subsist on the compound sugar sucrose by producing the enzyme invertase, which breaks down sucrose into glucose and fructose. A crucial point is that, since this reaction occurs near the cell wall, only about 1% of these simple sugars are captured by the cell in which they are produced. The remaining 99% diffuse away and are available to other cells. Thus producing invertase is a cooperative behavior, with the bulk of the benefit going to cells other than the producer. Moreover, this cooperation is costly, in that the production of invertase carries a metabolic cost to the producer. To study the evolution of this behavior, Gore et al. created cheater strains that do not produce invertase, and thereby avoid the associated cost.
Letting these strains compete with each other, they found that, in most cases, cooperator and cheater strains converged to an equilibrium in which both strains coexisted—a result consistent with theoretical predictions regarding Snowdrift games and nonlinear public goods games [1],[13].

Much theoretical work on the evolution of cooperation and other traits has assumed, for the sake of simplicity, that the population size remains roughly constant while the strains in question are competing. However, it is entirely possible that population dynamics—changes in population size—may occur on the same timescale as evolutionary dynamics—changes in the frequencies of competing types. In this case, these two dynamical processes may affect one another, a phenomenon known as eco-evolutionary feedback [27]–[30]. Mathematical modeling has shown that eco-evolutionary feedback may lead to a variety of complex dynamical behaviors, including multiple equilibria, cycling, chaos, and Turing patterns [28],[30]–[33].

In this issue of PLOS Biology, Sanchez and Gore [34] have—for the first time, to our knowledge—empirically demonstrated eco-evolutionary feedback in the evolution of cooperation. Using the yeast system described above, the authors studied the coupled dynamics of the population density and the proportion of cooperator types within the population. The mechanism for eco-evolutionary feedback in this system is intuitive: the growth of the population as a whole depends on the concentration of simple sugars, which in turn depends on the density of cooperators. If there are insufficient cooperators, the overall population density declines. With low population density, cooperators have an advantage due to the simple sugars they manage to retain for themselves. At this point, cooperators increase in frequency, and the concentration of simple sugars increases, leading to overall population growth.
But once this happens, cheaters proliferate faster than cooperators due to their lower metabolic costs. This in turn depresses the frequency of cooperators, and the cycle repeats itself. We would therefore expect to see cycling or spiraling behavior in the eco-evolutionary dynamics of these types, consistent with theoretical predictions [32],[33].

In their experiment, Sanchez and Gore observed not only spiraling, but also bistability—the presence of two equilibria to which the system might converge, depending on the initial conditions [35]. If the initial population density and/or the initial proportion of cooperators is too low, not enough simple sugars are produced and the population collapses. On the other hand, if there are sufficiently many cooperators in the initial population, the population converges in spiraling fashion to an equilibrium in which population density is high and cooperators and cheaters coexist (Figure 1). To complement their experiment, the authors developed a simple Lotka-Volterra–type model describing the interdependent growth of the competing strains. This model reproduces the observed eco-evolutionary dynamics with remarkable fidelity, given its simplicity.

Figure 1. Dynamics of eco-evolutionary feedback in cooperator and cheater strains of the yeast S. cerevisiae, as observed in the experiment of Sanchez and Gore. There are two basins of attraction, with a different outcome expected from each. If there are too few cooperators to start, not enough simple sugars are produced and the population collapses. On the other hand, if the initial number of cooperators is sufficient, the system converges in spiraling fashion to an equilibrium in which cooperators and cheaters coexist.

Interestingly, the proportion of cooperators in the coexistence equilibrium is low—less than 10%—but is nonetheless sufficient to maintain the viability of the population.
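The feedback loop described above is easy to caricature in code. The toy model below is written in the spirit of the authors' Lotka-Volterra-type description, but every parameter value is invented for illustration (this is not the fitted model from the paper): sugar supply tracks cooperator density, cooperators keep a small private share of the sugar but pay an invertase cost, and crowding raises the effective death rate.

```python
# Toy eco-evolutionary model of cooperator (invertase-producing) and cheater
# yeast. All parameters are invented for illustration; not the paper's model.
r, K = 0.5, 0.05    # maximal growth rate; half-saturation for sugar uptake
eps = 0.01          # private share of sugar retained by a cooperator ("~1%")
cost = 0.05         # metabolic cost of producing invertase
d0, Kn = 0.06, 1.0  # baseline death/dilution rate; crowding density scale

def simulate(nc, nd, t_end, dt=0.01):
    """Euler-integrate cooperator (nc) and cheater (nd) densities."""
    for _ in range(int(t_end / dt)):
        s = nc                                   # sugar supply tracks cooperator density
        death = d0 * (1 + (nc + nd) / Kn)        # crowding raises mortality
        gc = r * (s + eps) / (K + s + eps) - cost - death  # cooperator per-capita rate
        gd = r * s / (K + s) - death                       # cheater per-capita rate
        nc += dt * nc * gc
        nd += dt * nd * gd
    return nc, nd

# Too few cooperators: sugar stays scarce and the whole population collapses.
nc, nd = simulate(0.001, 0.1, t_end=200)
print(nc + nd)  # essentially zero

# Enough cooperators: the population grows, after which cheaters (who skip the
# invertase cost) proliferate and the cooperator fraction begins to fall.
nc, nd = simulate(0.5, 0.1, t_end=25)
print(nc + nd, nc / (nc + nd))
```

Even this crude sketch reproduces the two qualitative outcomes: a collapse basin when cooperators start rare, and growth with a declining cooperator share when they start common.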
Does the predominance of cheaters in this equilibrium hurt the population as a whole? The authors found that the overall density and productivity of the population in the coexistence equilibrium is not much less than what cooperators would achieve in the absence of cheaters. However, the predominance of cheaters does impact the population's resilience to an ecological shock—in this case, rapid and significant dilution of the population. Cooperators in monomorphic equilibrium survive this shock, but populations in mixed equilibrium between cooperators and cheaters do not. In short, mixed populations are comparably productive to, but significantly less resilient than, cooperator-only populations.

The study of Sanchez and Gore illustrates the synergistic power of theory and experiment when carefully combined. The opportunities for further such combinations are immense. Population genetics and evolutionary game theory have provided us with a wealth of testable hypotheses about evolution, and we now have the experimental technology to test them. Some of the most interesting hypotheses regard the effect of spatial structure on the evolution of cooperation. Well-known results in evolutionary game theory show that spatial structure can promote cooperation [6],[36]–[39], though this effect depends strongly on the details of spatial reproduction and replacement [40]. Thus far, experimental studies have addressed this question only indirectly, with reduced pathogen virulence representing an indirect form of cooperation [41], or with group subdivision standing in for spatial structure [22],[23]. The effects of spatial structure on the evolution of cooperation in microbial colonies remain an important open question.

At the same time, we must also allow experimental results to inform the development of new mathematical models.
The field of social bacterial evolution requires well-defined, simple models that describe how populations of bacteria change over time, taking into account the reproductive events, social interactions, and population structures particular to these populations. This approach ultimately brings together the methods of population genetics, evolutionary game theory, ecology, and experimental microbiology.

18.
With the increasing appreciation for the crucial roles that microbial symbionts play in the development and fitness of plant and animal hosts, there has been a recent push to interpret evolution through the lens of the “hologenome”—the collective genomic content of a host and its microbiome. But how symbionts evolve and, particularly, whether they undergo natural selection to benefit hosts are complex issues that are associated with several misconceptions about evolutionary processes in host-associated microbial communities. Microorganisms can have intimate, ancient, and/or mutualistic associations with hosts without having undergone natural selection to benefit hosts. Likewise, observing host-specific microbial community composition or greater community similarity among more closely related hosts does not imply that symbionts have coevolved with hosts, let alone that they have evolved for the benefit of the host. Although selection at the level of the symbiotic community, or hologenome, occurs in some cases, it should not be accepted as the null hypothesis for explaining features of host–symbiont associations.

The ubiquity and importance of microorganisms in the lives of plants and animals are ever more apparent, and increasingly investigated by biologists. Suddenly, we have the aspiration and tools to open up a new, complicated world, and we must confront the realization that almost everything about larger organisms has been shaped by their history of evolving from, then with, microorganisms [1]. This development represents a dramatic shift in perspective—arguably a revolution—in modern biology.

Do we need to revamp basic tenets of evolutionary theory to understand how hosts evolve with associated microorganisms? Some scientists have suggested that we do [2], and the recently introduced terms “holobiont” and “hologenome” encapsulate what has been described as an “emerging postmodern synthesis” [3].
Holobiont was initially used to refer to a host and a single inherited symbiont [4] but was later extended to a host and its community of associated microorganisms, specifically for the case of corals [5]. The idea of the holobiont is that a host and its associated microorganisms must be considered as an integrated unit in order to understand many biological and ecological features.

The later introduction of the term hologenome [2,6,7] sought to describe a holobiont by its genetic composition. The term has been used in different ways by different authors, but in most contexts a hologenome is considered a genetic unit that represents the combined genomes of a host and its associated microorganisms [8]. This non-controversial definition of hologenome is linked to the idea that this entity has a role in evolution. For example, Gordon et al. [1,9] state, “The genome of a holobiont, termed the hologenome, is the sum of the genomes of all constituents, all of which can evolve within that context.” That last phrase is sufficiently general that it can be interpreted in any number of ways. Like physical conditions, associated organisms can be considered as part of the environment and thus can be sources of natural selection, affecting evolution in each lineage.

But a more sweeping and problematic proposal is given by originators of the term, which is that “the holobiont with its hologenome should be considered as the unit of natural selection in evolution” [2,7] or by others, that “an organism’s genetics and fitness are inclusive of its microbiome” [3,4]. The implication is that differential success of holobionts influences evolution of participating organisms, such that their observed features cannot be fully understood without considering selection at the holobiont level. Another formulation of this concept is the proposal that the evolution of host–microbe systems is “most easily understood by equating a gene in the nuclear genome to a microbe in the microbiome” [8].
Under this view, interactions between host and microbial genotypes should be considered as genetic epistasis (interactions among alleles at different loci in a genome) rather than as interactions between the host’s genotype and its environment.

While biologists would agree that microorganisms have important roles in host evolution, this statement is a far cry from the claim that they are fused with hosts to form the primary units of selection, or that hosts and microorganisms provide different portions of a unified genome. Broadly, the hologenome concept contends, first, that participating lineages within a holobiont affect each other’s evolution, and, second, that the holobiont is a primary unit of selection. Our aim in this essay is to clarify what kinds of evidence are needed for each of these claims and to argue that neither should be assumed without evidence. We point out that some observations that superficially appear to support the concept of the hologenome have spawned confusion about real biological issues (Box 1).

Box 1. Misconceptions Related to the Hologenome Concept

Misconception #1: Similarities in microbiomes between related host species result from codiversification. Reality: Related species tend to be similar in most traits. Because microbiome composition is a trait that involves living organisms, it is tempting to assume that these similarities reflect a shared evolutionary history of host and symbionts. This has been shown to be the case for some symbioses (e.g., ancient maternally inherited endosymbionts in insects). But for many interactions (e.g., gut microbiota), related hosts may have similar effects on community assembly without any history of codiversification between the host and individual microbial species (Fig 1B).

Fig 1. Alternative evolutionary processes can result in related host species harboring similar symbiont communities. Left panel: Individual symbiont lineages retain fidelity to evolving host lineages, through co-inheritance or other mechanisms, with some gain and loss of symbiont lineages over evolutionary time. Right panel: As host lineages evolve, they shift their selectivity of environmental microbes, which are not evolving in response and which may not even have been present during host diversification. In both cases, measures of community divergence will likely be smaller for more closely related hosts, but they reflect processes with very different implications for hologenome evolution. Image credit: Nancy Moran and Kim Hammond, University of Texas at Austin.

Misconception #2: Parallel phylogenies of host and symbiont, or intimacy of host and symbiont associations, reflect coevolution. Reality: Coevolution is defined by a history of reciprocal selection between parties. While coevolution can generate parallel phylogenies or intimate associations, these can also result from many other mechanisms.
Misconception #3: Highly intimate associations of host and symbionts, involving exchange of cellular metabolites and specific patterns of colonization, result from a history of selection favoring mutualistic traits. Reality: The adaptive basis of a specific trait is difficult to infer even when the trait involves a single lineage, and it is even more daunting when multiple lineages contribute. But complexity or intimacy of an interaction does not always imply a long history of coevolution, nor does it imply that the nature of the interaction involves mutual benefit.

Misconception #4: The essential roles that microbial species/communities play in host development are adaptations resulting from selection on the symbionts to contribute to holobiont function. Reality: Hosts may adapt to the reliable presence of symbionts in the same way that they adapt to abiotic components of the environment, and little or no selection on symbiont populations need be involved.

Misconception #5: Because of the extreme importance of symbionts in essential functions of their hosts, the integrated holobiont represents the primary unit of selection. Reality: The strength of natural selection at different levels of biological organization is a central issue in evolutionary biology and the focus of much empirical and theoretical research. But insofar as there is a primary unit of selection common to diverse biological systems, it is unlikely to be at the level of the holobiont. In particular cases, evolutionary interests of host and symbionts can be sufficiently aligned such that the predominant effect of natural selection on genetic variation in each party is to increase the reproductive success of the holobiont. But in most host–symbiont relationships, contrasting modes of genetic transmission will decouple selection pressures.

19.
How often do people visit the world’s protected areas (PAs)? Despite PAs covering one-eighth of the land and being a major focus of nature-based recreation and tourism, we don’t know. To address this, we compiled a globally representative database of visits to PAs and built region-specific models predicting visit rates from PA size, local population size, remoteness, natural attractiveness, and national income. Applying these models to all but the very smallest of the world’s terrestrial PAs suggests that together they receive roughly 8 billion (8 × 10⁹) visits/y—of which more than 80% are in Europe and North America. Linking our region-specific visit estimates to valuation studies indicates that these visits generate approximately US $600 billion/y in direct in-country expenditure and US $250 billion/y in consumer surplus. These figures dwarf current, typically inadequate spending on conserving PAs. Thus, even without considering the many other ecosystem services that PAs provide to people, our findings underscore calls for greatly increased investment in their conservation.

Enjoyment of nature, much of it in protected areas (PAs), is recognised as the most prominent cultural ecosystem service [1–3], yet we still lack even a rough understanding of its global magnitude and economic significance. Large-scale assessments have been restricted to regional or biome-specific investigations [4–8] (but see [9]). There are good reasons for this. Information on visit rates is limited, widely scattered, and confounded by variation in methods [10,11]. Likewise, estimates of the value of visits vary greatly—geographically, among methods, and depending on the component of value being measured [12–14]. Until now, these problems have prevented data-driven analysis of the worldwide scale of nature-based recreation and tourism.
But with almost all the world’s governments committed (through the Aichi Biodiversity Targets [15]) to integrating biodiversity into national accounts, policymakers require such gaps in our knowledge of natural capital to be filled.

We tackled this shortfall in our understanding of a major ecosystem service by focusing on terrestrial PAs, which cover one-eighth of the land [16] and are a major focus of nature-based recreation and tourism. We compiled data on visit rates to over 500 PAs and built region-specific models, which predicted variation in visitation in relation to the properties of PAs and to local socioeconomic conditions. Next, we used these models to estimate visit rates to all but the smallest of the world’s terrestrial PAs. Last, by summing these estimates by region and combining the totals with region-specific medians for the value of nature visits obtained from the literature, we derived approximate estimates of the global extent and economic significance of PA visitation.

Given the scarcity of data on visits to PAs, our approach was to use all available information (although we excluded marine and Antarctic sites, and International Union for Conservation of Nature (IUCN) Category I PAs where tourism is typically discouraged; for further details of data collection and analysis see Materials and Methods). This generated a database of visitor records for 556 PAs spread across 51 countries and included 2,663 records of annual visit numbers over our best-sampled ten-year period (1998–2007) (S1 Table).
Mean annual visit rates for individual PAs in this sample ranged from zero to over 10 million visits/y, with a median across all sampled PAs of 20,333 visits/y.

We explored this variation by modelling it in relation to a series of biophysical and socioeconomic variables that might plausibly predict visit rates (after refs [6,7,17]): PA size, local population size, PA remoteness, a simple measure of the attractiveness of the PA’s natural features, and national income (see Materials and Methods for a priori predictions). For each of five major regions, we performed univariate regressions (S2 Table) and then built generalised linear models (GLMs) in an effort to predict variation in observed visit rates. While the GLMs had modest explanatory power within regions (S3 Table), together they accounted for 52.9% of observed global variation in visit rates. Associations with individual GLM variables—controlling for the effects of other variables—differed regionally in their strength but broadly matched our predictions (S1 Fig.). Visit rates increased with local population size (in Europe), decreased with remoteness (everywhere apart from Asia/Australasia), increased with natural attractiveness (in North and Latin America), and increased with national income (everywhere else). Controlling for these variables, visit rates were highest in North America, lower in Asia/Australasia and Europe, and lowest in Africa and Latin America.

To quantify how often people visit PAs as a whole, we used our region-specific GLMs to estimate visit rates to 94,238 sites listed in the World Database on Protected Areas (WDPA) [18]. We again excluded marine, Antarctic, and Category I PAs, as well as almost 40,000 extremely small sites which were below the size (10 ha) of the smallest PA in our sample (S2 Fig.).
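The modelling step can be sketched with synthetic data. The authors fit region-specific GLMs; the simplified stand-in below instead uses an ordinary least-squares fit of log visit rates on the five covariates named above, with all data, coefficients, and variable names invented for illustration.

```python
import numpy as np

# Simplified sketch of predicting log visit rates from PA covariates.
# Synthetic data only; coefficient signs follow the paper's broad findings
# (more visits near people, fewer when remote, more when attractive/richer).
rng = np.random.default_rng(1)
n = 556  # same order as the number of PAs in the compiled database

log_area = rng.normal(8, 2, n)       # log PA size
log_pop = rng.normal(10, 1.5, n)     # log local population size
remoteness = rng.normal(0, 1, n)     # standardised travel-time proxy
attract = rng.integers(0, 3, n)      # crude natural-attractiveness score (0-2)
log_gdp = rng.normal(9, 1, n)        # log national income

# Generate synthetic log visit rates from assumed "true" effects plus noise.
log_visits = (2.0 + 0.1 * log_area + 0.4 * log_pop - 0.6 * remoteness
              + 0.5 * attract + 0.3 * log_gdp + rng.normal(0, 1, n))

# Fit by ordinary least squares and recover the generating coefficients.
X = np.column_stack([np.ones(n), log_area, log_pop, remoteness, attract, log_gdp])
beta, *_ = np.linalg.lstsq(X, log_visits, rcond=None)
print(beta)
```

With a sample of this size, the fitted coefficients land close to the generating values, which is the sense in which such a model "predicts variation" in visit rates from site and socioeconomic covariates.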
The limited power of our GLMs and significant errors in the WDPA mean our estimates of visit rates should be treated with caution for individual sites or (when aggregated to national level) for smaller countries. However, the larger-scale patterns they reveal are marked. Estimated median visit rates per PA (averaged within countries) are lowest in Africa (at around 3,000/y) and Latin America (4,000/y), and greatest in North America (350,000/y) (S3 Table). When visit rates are aggregated across all PAs within a country, pronounced regional differences in the numbers of PAs (with relatively few in Africa and Latin America) magnify these patterns and indicate that while many African countries have <100,000 PA visits/y, PAs in the United States receive a combined total of over 3 billion visits/y (Fig. 1). This variation is underscored when aggregate PA visit rates are standardised by the annual number of non-workdays and total population size of each region: across Europe we reckon there are ~5 PA visits/100 non-work person-days; for North America, the figure is ~10 visits/100 non-work person-days, while for each other region our estimates are <0.3 visits/100 non-work person-days.

Fig 1. Estimated total PA visit rates for each country. Totals (which are log10-transformed) were derived by applying the relevant regional GLM (S3 Table) to all of a country’s terrestrial PAs (excluding those <10 ha, and marine and IUCN Category I PAs) listed in the WDPA [18]. Asterisks show countries for which we had visit rate observations.

Summing our aggregate estimates of PA visits suggests that between them, the world’s terrestrial PAs receive approximately 8 billion visits/y. Of these, we estimate 3.8 billion visits/y are in Europe (where more than half of the PAs in the WDPA are located) and 3.3 billion visits/y are in North America (S3 Table). These numbers are strikingly large.
However, given our confidence intervals (95% CIs for the global total: 5.4–18.5 billion/y) and several conservative aspects of our calculations (e.g., the exclusion of ~40,000 very small sites and the incomplete nature of the WDPA), we consider it implausible that there are fewer than 5 billion PA visits worldwide each year. Three national estimates support this view: 2.5 billion visit-days/y to US PAs in 1996 [4], >1 billion visits/y (albeit many of them cultural rather than nature-based) to China's National Parks in 2006 [19], and 3.2–3.9 billion visits/y to all British “ecosystems” (most of which are not in PAs) in 2010 [7].

Finally, what can be inferred about the economic significance of visits on this scale? Economists working on tourism distinguish two main, non-overlapping components of value [12]: direct expenditure by visitors (an element of economic impact, calculated from spending on fees, travel, accommodation, etc.) and consumer surplus (a measure of economic value, defined as the difference between what visitors would be prepared to pay for a visit and what they actually spend, and typically quantified using travel cost or contingent valuation methods). We conducted an extensive literature search to derive median (but conservative) figures for each type of value for each region (S4 Table). Applying these to our corresponding estimates of visit rates and summing across regions yields an estimate of global gross direct expenditure associated with PA visits (within-country only, and excluding indirect and induced expenditure) of ~US $600 billion/y worldwide (at 2014 prices). The corresponding figure for global consumer surplus is ~US $250 billion/y.

Such numbers are unavoidably imprecise.
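The scaling step from visit totals to dollar values is a per-region multiply-and-sum. The per-visit dollar figures below are invented placeholders (the study's actual medians are in S4 Table), chosen only so the totals land in the reported ballpark:

```python
# Illustrative reconstruction of the valuation arithmetic: regional visit
# totals multiplied by median per-visit values, then summed across regions.
# All per-visit dollar figures are placeholder assumptions, not S4 Table values.
visits = {"Europe": 3.8e9, "North America": 3.3e9, "Rest of world": 0.9e9}
expenditure_per_visit = {"Europe": 70.0, "North America": 100.0, "Rest of world": 40.0}
surplus_per_visit = {"Europe": 30.0, "North America": 40.0, "Rest of world": 20.0}

total_expenditure = sum(visits[r] * expenditure_per_visit[r] for r in visits)
total_surplus = sum(visits[r] * surplus_per_visit[r] for r in visits)
print(f"Gross direct expenditure: ~${total_expenditure / 1e9:.0f} billion/y")
print(f"Consumer surplus:         ~${total_surplus / 1e9:.0f} billion/y")
```

Because the totals are dominated by the two highest-visitation regions, the global figures are far more sensitive to the European and North American per-visit values than to those for other regions.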
Uncertainty in our modelled visit rates and the wide variation in published estimates of expenditure and consumer surplus mean that they could be out by a factor of two or more. However, comparison with calculations that visits to North American PAs alone have an economic impact of $350–550 billion/y [4] and that direct expenditure on all travel and tourism worldwide runs at $2,000 billion/y [20] suggests our figures are of the correct order of magnitude, and that the value of PA visitation runs into hundreds of billions of dollars annually.

These results quantify, we believe for the first time, the scale of visits to the world's PAs and their approximate economic significance. We currently spend <$10 billion/y on safeguarding PAs [21]—a figure widely regarded as grossly insufficient [21–25]. Even without considering the many other benefits which PAs provide [22], our estimates of the economic impact and value of PA visitation dwarf current expenditure—highlighting the risks of underinvestment in conservation, and suggesting that increased investment in protected area maintenance and expansion would yield substantial returns.

Social hierarchy is a fact of life for many animals. Navigating social hierarchy requires understanding one's own status relative to others and behaving accordingly, while achieving higher status may call upon cunning and strategic thinking. The neural mechanisms mediating social status have become increasingly well understood in invertebrates and model organisms like fish and mice but until recently have remained more opaque in humans and other primates. In a new study in this issue, Noonan and colleagues explore the neural correlates of social rank in macaques. Using both structural and functional brain imaging, they found neural changes associated with individual monkeys' social status, including alterations in the amygdala, hypothalamus, and brainstem—areas previously implicated in dominance-related behavior in other vertebrates. A separate but related network in the temporal and prefrontal cortex appears to mediate more cognitive aspects of strategic social behavior. These findings begin to delineate the neural circuits that enable us to navigate our own social worlds. A major remaining challenge is identifying how these networks contribute functionally to our social lives, which may open new avenues for developing innovative treatments for social disorders.
“Observing the habitual and almost sacred ‘pecking order’ which prevails among the hens in his poultry yard—hen A pecking hen B, but not being pecked by it, hen B pecking hen C and so forth—the politician will meditate on the Catholic hierarchy and Fascism.” —Aldous Huxley, Point Counter Point (1929)
From the schoolyard to the boardroom, we are all, sometimes painfully, familiar with the pecking order. First documented by the Norwegian zoologist Thorleif Schjelderup-Ebbe in his PhD thesis on social status in chickens in the 1920s, a pecking order is a hierarchical social system in which each individual is ranked in order of dominance [1]. In chickens, the top hen can peck all lower birds, the second-ranking bird can peck all birds ranked below her, and so on. Since it was first coined, the term has become widely applied to any such hierarchical system, from business, to government, to the playground, to the military.

Social hierarchy is a fact of life not only for humans and chickens but also for most highly social, group-living animals. Navigating social hierarchies and achieving dominance often appear to require cunning, intelligence, and strategic social planning. Indeed, the Renaissance Italian politician and writer Niccolo Machiavelli argued in his best-known book “The Prince” that the traits most useful for attaining and holding on to power include manipulation and deception [2]. Since then, the term “Machiavellian” has come to signify a person who deceives and manipulates others for personal advantage and power. More than four centuries later, Frans de Waal applied the term Machiavellian to social maneuvering by chimpanzees in his book Chimpanzee Politics [3].
De Waal argued that chimpanzees, like Renaissance Italian politicians, apply guile, manipulation, strategic alliance formation, and deception to enhance their social status—in this case, not to win fortune and influence but to increase their reproductive success (which is presumably the evolutionary origin of status-seeking in Renaissance Italian politicians as well).

The observation that navigating large, complex social groups in chimpanzees and many other primates seems to require sophisticated cognitive abilities spurred the development of the social brain hypothesis, originally proposed to explain why primates have larger brains for their body size than do other animals [4],[5]. Since its first proposal, the social brain hypothesis has accrued ample evidence endorsing the connections between increased social network complexity, enhanced social cognition, and larger brains. For example, among primates, neocortex size, adjusted for the size of the brain or body, varies with group size [6],[7], frequency of social play [8], and social learning [9].

Of course, all neuroscientists know that when it comes to brains, size isn't everything [10]. Presumably the social cognitive functions required for strategic social behavior are mediated by specific neural circuits. Here, we summarize and discuss several recent discoveries, focusing on an article by Noonan and colleagues in the current issue, which together begin to delineate the specific neural circuits that mediate our ability to navigate our social worlds.

Using structural magnetic resonance imaging (MRI), Bickart and colleagues showed that the size of the amygdala—a brain nucleus important for emotion, vigilance, and rapid behavioral responses—is correlated with social network size in humans [11]. Subsequent studies showed similar relationships for other brain regions implicated in social function, including the orbitofrontal cortex [12] and ventromedial prefrontal cortex [13].
Indeed, one study even found an association between grey matter density in the superior temporal sulcus (STS) and temporal gyrus and an individual's number of Facebook friends [14]. Collectively, these studies suggest that the number and possibly the complexity of relationships one maintains vary with the structural organization of a specific network of brain regions, which are recruited when people perform tests of social cognition such as recognizing faces or inferring others' mental states [15],[16]. These studies, however, do not reveal whether social complexity actively changes these brain areas through plasticity or whether individual differences in the structure of these networks ultimately determine social abilities.

To address this question, Sallet and colleagues experimentally assigned rhesus macaques to social groups of different sizes and then scanned their brains with MRI [17]. The authors found significant positive associations between social network size and morphology in mid-STS, rostral STS, inferior temporal (IT) gyrus, rostral prefrontal cortex (rPFC), temporal pole, and amygdala. The authors also found a different region in rPFC that scaled positively with social rank; as grey matter in this region increased, so did the monkey's rank in the hierarchy. As in the human studies described previously, many of these regions are implicated in various aspects of social cognition and perception [18]. These findings endorse the idea that neural plasticity is engaged in specifically social brain areas in response to the demands of the social environment, changing these areas structurally according to an individual's experiences with others.

Sallet and colleagues also examined spontaneous coactivation among these regions using functional MRI (fMRI).
Measures of coactivation are thought to reflect coupling between regions [19],[20]; these measures are observable in many species [21],[22] and vary according to behavior [23],[24], genetics [25], and sex [26], suggesting that coactivation may underlie basic neural function and interaction between brain regions. The authors found that coactivation between the STS and rPFC increased with social network size and that coactivation between IT and rPFC increased with social rank. These findings show that not only do structural changes occur in these regions to meet the demands of the social environment but that these structural changes mediate changes in function as well.

One important question raised by the study by Sallet and colleagues is whether changes in the structure and function of social brain areas are specific outcomes of social network size or of dealing with social hierarchy. After all, larger groups offer more opportunity for a larger, more despotic pecking order. In the current volume, Noonan and colleagues address this question directly by examining the structural and functional correlates of social status in macaques independently of social group size [27]. The authors collected MRI scans from rhesus macaques and measured changes in grey matter associated with social dominance. By scanning monkeys of different ranks living in groups of different sizes, the authors were able to disentangle the effects of social rank from those of social network size (Figure 1).

Figure 1. Brain regions in rhesus macaques related to social environment. Primary colors indicate brain regions in which morphometry tracks social network size. Pastel colors indicate brain regions in which morphometry tracks social status in the hierarchy.
Regions of interest adapted from [48], overlaid on the Montreal Neurological Institute (MNI) macaque template [49].

The authors found a network of regions in which grey matter measures varied with social rank; these regions included the bilateral central amygdala, bilateral brainstem (between the medulla and midbrain, including parts of the raphe nuclei), and hypothalamus, which varied positively with dominance, and regions in the basal ganglia, which varied negatively with social rank. These regions have been implicated in social rank functions across a number of species [28]–[32]. Importantly, these relationships were unique to social status. There was no relationship between grey matter in these subcortical areas and social network size, endorsing a specific role in social dominance-related behavior. Nevertheless, grey matter in bilateral mid-STS and rPFC varied with both social rank and social network size, as reported previously. These findings demonstrate that specific brain areas uniquely mediate functions related to social hierarchy, whereas others may subserve more general social cognitive processes.

Noonan and colleagues next probed spontaneous coactivation using fMRI to examine whether functional coupling between any of these regions varied with social status. They found that the more subordinate an animal, the stronger the functional coupling between multiple regions related to dominance. These results suggest that individual differences in social status are functionally observable in the brain even while the animal is at rest and not engaged in social behavior.
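The design logic of separating rank effects from group-size effects, even though the two are correlated (larger groups permit higher maximum ranks), is essentially a multiple-regression problem. A toy sketch with synthetic data (all numbers invented, not Noonan and colleagues' measurements):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: 80 monkeys of varying rank live in groups of varying
# size; grey matter in a region of interest depends on rank but NOT on
# group size. All quantities are simulated for illustration.
n = 80
group_size = rng.integers(2, 8, n)                              # groups of 2-7
rank = np.array([rng.integers(1, g + 1) for g in group_size])   # rank within group
grey_matter = 0.5 * rank + rng.normal(0, 0.5, n)                # rank effect only

# Because rank and group size are correlated, a simple correlation would
# confound them; regressing on both predictors separates the two effects.
A = np.column_stack([np.ones(n), rank, group_size])
coef, *_ = np.linalg.lstsq(A, grey_matter, rcond=None)
print(f"rank effect ~= {coef[1]:.2f}, group-size effect ~= {coef[2]:.2f}")
```

The fitted rank coefficient recovers the simulated effect while the group-size coefficient stays near zero, mirroring the paper's finding of subcortical regions that track status but not network size.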
These findings suggest that structural changes associated with individual differences in social status alter baseline brain function, consistent with the idea that the default mode of the brain is social [33] and that the sense of self and perhaps even awareness emerge from inwardly directed social reasoning [34].

These findings resonate with previous work on the neural basis of social dominance in other vertebrates. In humans, for example, activity in the amygdala tracks knowledge of social hierarchy [28],[35] and, further, shows activity patterns that uniquely encode social rank and predict relevant behaviors [28]. Moreover, recent research has identified a specific region in the mouse hypothalamus, aptly named the “hypothalamic attack area” [36],[37]. Stimulating neurons in this area immediately triggers attacks on other mice and even on an inflated rubber glove, while inactivating these neurons suppresses aggression [38]. In the African cichlid fish Haplochromis burtoni, a change in the social status of an individual male induces a reversible change in the abundance of specialized neurons in the hypothalamus that communicate hormonally with the pituitary and gonads [39]. Injections of this hormone in male birds after an aggressive territorial encounter amplify the normal subsequent rise in testosterone [40]. Serotonin neurons in the raphe area of the brainstem also contribute to dominance-related behaviors in fish [29],[31] and to aggression in monkeys [41].

Despite these advances, there are still gaps in our understanding of how these circuits mediate status-related behaviors. Though regions in the amygdala, brainstem, and hypothalamus vary structurally and functionally with social rank, it remains unknown precisely how they contribute to or respond to social status.
For example, though amygdala function and structure correlate with social status in both humans and nonhuman primates [27],[28],[35],[42], it remains unknown which aspects of dominance this region contributes to or underlies. One model suggests that the amygdala contributes to learning or representing one's own status within a social hierarchy [28],[35]. Alternatively, the amygdala could contribute to behaviors that support social hierarchy, including gaze following [43] and theory of mind [44]. Lastly, the amygdala could contribute to social rank via interpersonal behaviors or personality traits, such as aggression [45], grooming [45], or fear responses [46],[47]. Future work will be critical to determine how signals in these regions relate to social status; direct manipulation of these regions, possibly via microstimulation, larger-scale brain stimulation (e.g., transcranial magnetic stimulation and transcranial direct current stimulation), or temporary lesions, will be needed to better understand these relationships.

The work by Noonan and colleagues suggests new avenues for exploring how the brain both responds to and makes possible social hierarchy in nonhuman primates and humans. The fact that the neural circuits mediating dominance and social networking behavior can be identified and measured from structural and functional brain scans, even at rest, suggests that similar measures can be made in humans. Although social status is much more complex in people than it is in monkeys or fish, it is just as critical for us and most likely depends on shared neural circuits. Understanding how these circuits work, how they develop, and how they respond to the local social environment may help us to understand and ultimately treat disorders, like autism, social anxiety, or psychopathy, that are characterized by impaired social behavior and cognition.
