Similar articles
2.
It was recently proposed that long-term population studies be exempted from the expectation that authors publicly archive the primary data underlying published articles. Such studies are valuable to many areas of ecological and evolutionary biological research, and multiple risks to their viability were anticipated as a result of public data archiving (PDA), ultimately all stemming from independent reuse of archived data. However, empirical assessment was missing, making it difficult to determine whether such fears are realistic. I addressed this by surveying data packages from long-term population studies archived in the Dryad Digital Repository. I found no evidence that PDA results in reuse of data by independent parties, suggesting the purported costs of PDA for long-term population studies have been overstated.Data are the foundation of the scientific method, yet individual scientists are evaluated via novel analyses of data, generating a potential conflict of interest between a research field and its individual participants that is manifested in the debate over access to the primary data underpinning published studies [15]. This is a chronic issue but has become more acute with the growing expectation that researchers publish the primary data underlying research reports (i.e., public data archiving [PDA]). Studies show that articles publishing their primary data are more reliable and accrue more citations [6,7], but a recent opinion piece by Mills et al. [2] highlighted the particular concerns felt by some principal investigators (PIs) of long-term population studies regarding PDA, arguing that unique aspects of such studies render them unsuitable for PDA. The "potential costs to science" identified by Mills et al. [2] as arising from PDA are as follows:
  • Publication of flawed research resulting from a "lack of understanding" by independent researchers conducting analyses of archived data
  • Time demands placed on the PIs of long-term population studies arising from the need to correct such errors via, e.g., published rebuttals
  • Reduced opportunities for researchers to obtain the skills needed for field-based data collection because equivalent long-term population studies will be rendered redundant
  • Reduced number of collaborations
  • Inefficiencies resulting from repeated assessment of a hypothesis using a single dataset
Each "potential cost" is ultimately predicated on the supposition that reuse of archived long-term population data is common, yet the extent to which this is true was not evaluated. To assess the prevalence of independent reuse of archived data—and thereby examine whether the negative consequences of PDA presented by Mills et al. [2] may be realised—I surveyed datasets from long-term population studies archived in the Dryad Digital Repository (hereafter, Dryad). Dryad is an online service that hosts data from a broad range of scientific disciplines, but its content is dominated by submissions associated with ecological and evolutionary biological research [8]. I examined all the Dryad packages associated with studies from four journals featuring ecological or evolutionary research: The American Naturalist, Evolution, Journal of Evolutionary Biology, and Proceedings of the Royal Society B: Biological Sciences (the latter referred to hereafter as Proceedings B). These four journals together represent 23.3% of Dryad''s contributed packages (as of early February 2016). Mills et al. [2] refer to short- versus long-term studies but do not provide a definition of this dichotomy. However, the shortest study represented by their survey lasted for 5 years, so I used this as the minimum time span for inclusion in my survey. This cut-off seems reasonable, as it will generally exclude studies resulting from single projects, such that included datasets likely relate to studies resulting from a sustained commitment on the part of researchers—although one included package contains data gathered via “citizen science” [9], and two others contain data derived from archived human population records [10,11]. However, as these datasets cover extended time spans and were used to address ecological questions [1214], they were retained in my survey sample. Following Mills et al. [2], my focus was on population studies conducted in natural (or seminatural) settings, so captive populations were excluded. Because I was assessing the reuse of archived data, I excluded packages published by Dryad after 2013: authors can typically opt to impose a 1-year embargo, and articles based on archived data will themselves take some time to be written and published.Of the 1,264 archived data packages linked to one of the four journals and published on the Dryad website before 2014, 72 were identified as meeting the selection criteria. This sample represents a diverse range of taxa (Fig 1) and is comparable to the 73 studies surveyed by Mills et al. [2], although my methodology permits individual populations to be represented more than once, since the survey was conducted at the level of published articles (S1 Table). Of these 72 data packages, five had long-term embargoes remaining active (three packages with 5-year embargoes [1517]; two packages with 10-year embargoes [18,19]). For two of these [17,19], the time span of the study could not be estimated because this information is not provided in the associated articles [20,21]. For a third package [22], the archived data indicated 10 years were represented (dummy coding was used to disguise factor level identities, including for year), yet the text of the associated paper suggests data collection covered a considerably greater time span [23]. However, since the study period is not stated in the text, I followed the archived data [22] in assuming data collection spanned a 10-year period. 
The distribution of study time spans is shown in Fig 2.

Fig 1. Taxonomic representation of the 72 data packages included in the survey. The number of packages for each taxon is given in parentheses (note: one data package included data describing both insects and plants [9], while other data packages represented multiple species within a single taxonomic category).

Fig 2. The study periods of the 70 data packages included in the survey for which this could be calculated.

For each year from 2000 to 2004, these four journals contributed no more than a single data package to Dryad between them. However, around the time that the Joint Data Archiving Policy (JDAP; [24]) was adopted by three of these, we see a surge in PDA by ecologists and evolutionary biologists (Fig 3), such that in 2015 these four journals were collectively represented by 709 data packages. Of course, Mills et al. [2] argue against mandatory archiving of primary data for long-term studies in particular. For this subset of articles published in these four journals, the same pattern is observed: prior to adoption of the JDAP, only two data packages associated with long-term studies had been archived in Dryad, but following the implementation of the JDAP as a condition of publication in The American Naturalist, Evolution, and Journal of Evolutionary Biology, there is a rapid increase in the number of data packages being archived, despite the continuing availability of alternative venues should authors wish to avoid the purported costs of PDA as Mills et al. [2] contend. As the editorial policy of Proceedings B has shifted towards an increasingly strong emphasis on PDA (it is now mandatory), there has similarly been an increase in the representation of articles from this journal in Dryad, both overall (Fig 3) and for long-term studies in particular (Fig 4). These observations suggest that authors rarely chose to publicly archive their data prior to the adoption of PDA policies by journals and that uptake of PDA spread rapidly once it became a prerequisite for publication. In this respect, researchers using long-term population studies are no different to those in other scientific fields, despite the assertion by Mills et al. [2] that they are a special case owing to the complexity of their data. In reality, researchers in many other scientific disciplines also seek to identify relationships within complex systems. Within neuroscience, for example, near-identical objections to PDA were raised at the turn of the century [25], while archiving of genetic and protein sequences by molecular biologists has yielded huge advances but was similarly resisted until revised journal policies stimulated a change in culture [1,26].

Fig 3. Total number of data packages archived in the Dryad Digital Repository each year for four leading journals within ecology and evolutionary biology. Arrow indicates when the Joint Data Archiving Policy (JDAP) was adopted by Evolution, Journal of Evolutionary Biology, and The American Naturalist.
Note that because data packages are assigned a publication date by Dryad prior to journal publication (even if an embargo is imposed), some data packages will have been published in the year preceding the journal publication of their associated article.

Fig 4. Publication dates of the 72 data packages from long-term study populations that were included in the survey.

A primary concern raised by opponents of PDA is that sharing their data will see them “scooped” by independent researchers [6,8,27–30]. To quantify this risk for researchers maintaining long-term population studies, I used the Web of Science (wok.mimas.ac.uk) to search for citations of each data package (as of November 2015). For the 67 Dryad packages that were publicly accessible, none were cited by any article other than that from which it was derived. However, archived data could conceivably have been reused without the data package being cited, so I examined all journal articles that cited the study report associated with each data package (median citation count: 9; range: 0–58). Although derived metrics from the main articles were occasionally included in quantitative reviews [31,32] or formal meta-analyses [33], I again found no examples of the archived data being reused by independent researchers. As a third approach, I emailed the corresponding author(s) listed for each article, to ask if they were themselves aware of any examples. The replies I received (n = 35) confirmed that there were no known cases of long-term population data being independently reused in published articles. The apparent concern of some senior researchers that PDA will see them "collect data for 30 years just to be scooped" [30] thus lacks empirical support. It should also be noted that providing primary data upon request precedes PDA as a condition of acceptance for most major scientific journals [8]. PDA merely serves to ensure that authors meet this established commitment, a step made necessary by the failure rate that is otherwise observed, even after the recent revolution in communications technology [34–36]. As my survey shows, in practice the risk of being scooped is a monster under the bed: empirical assessment fails to justify the level of concern expressed. While long-term population studies are unquestionably a highly valuable resource for ecologists [2,37–39] and will likely continue to face funding challenges [37–39], there is no empirical support for the contention of Mills et al. [2] that PDA threatens their viability, although this situation may deserve reassessment in the future if the adoption of PDA increases within ecology and evolutionary biology. Nonetheless, in the absence of assessments over longer time frames (an inevitable result of the historical reluctance to adopt PDA), my survey results raise doubts over the validity of arguments favouring extended embargoes for archived data [29,40], and particularly the suggestion that multidecadal embargoes should be facilitated for long-term studies [2,41].

Authors frequently assert that unique aspects of their long-term study render it especially well suited to addressing particular issues. Such claims contradict the suggestion that studies will become redundant if PDA becomes the norm [2] while simultaneously highlighting the necessity of making primary data available for meaningful evaluation of results.
For research articles relying on data collected over several decades, independent replication is clearly impractical, such that reproducibility (the ability for a third party to replicate the results exactly [42]) is rendered all the more crucial. Besides permitting independent validation of the original results, PDA allows assessment of the hypotheses using alternative analytical methods (large datasets facilitate multiple analytical routes to test a single biological hypothesis, which likely contributes to poor reproducibility [43]) and reassessment if flaws in the original methodology later emerge [44]. Although I was not attempting to use archived data to replicate published results, and thus did not assess the contents of each package in detail, at least six packages [10,45–49] failed to provide the primary data underlying their associated articles, including a quantitative genetic study [50] for which only pedigree information was archived [47]. This limits exploration of alternative statistical approaches to the focal biological hypothesis and impedes future applications of the data that may be unforeseeable by the original investigators (a classic example being Bumpus' [51] dataset describing house sparrow survival [52]), but it seems to be a reality of PDA within ecology and evolution at present [53].

The "solutions" proffered by Mills et al. [2] are, in reality, alternatives to PDA that would serve to maintain the status quo with respect to data accessibility for published studies (i.e., subject to consent from the PI). This is a situation that is widely recognised to be failing with respect to the availability of studies' primary data [34–36,54]. Indeed, for 19% (13 of 67 nonembargoed studies) of the articles represented in my survey, the correspondence email addresses were no longer active, highlighting how rapidly access to long-term primary data can be passively lost. It is unsurprising, then, that 95% of scientists in evolution and ecology are reportedly in favour of PDA [1]. Yet, having highlighted the value and irreplaceability of data describing long-term population studies, Mills et al. [2] reject PDA in favour of allowing PIs to maintain postpublication control of primary data, going so far as to discuss the possibility of data being copyrighted. Such an attitude risks inviting public ire, since asserting private ownership ignores the public funding that likely enabled data collection, and is at odds with a Royal Society report urging scientists to "shift away from a research culture where data is viewed as a private preserve" [55]. I contend that primary data would better be considered as an intrinsic component of a published article, alongside the report appearing in the pages of a journal that presents the data's interpretation. In this way, an article would move closer to being a self-contained product of research that is fully accessible and assessable. For issues that can only be addressed using data covering an extended time span [2,37–39], excusing long-term studies from the expectation of publishing primary data would potentially render the PIs as unaccountable gatekeepers of scientific consensus. PDA encourages an alternative to this and facilitates a change in the treatment of published studies, from the system of preservation (in which a study's contribution is fixed) that has been the historical convention, towards a conservation approach (in which support for hypotheses can be reassessed and updated) [56].
Given the fundamentally dynamic nature of science, harnessing the storage potential enabled by the Information Age to ensure a study's contribution can be further developed or refined in the future seems logical and would benefit both the individual authors (through enhanced citations and reputation) and the wider scientific community.

The comparison Mills et al. [2] draw between PIs and pharmaceutical companies in terms of how their data are treated is inappropriate: whereas the latter bear the financial cost of developing a drug, a field study's costs are typically covered by the public purse, such that the personal risks of a failed project are largely limited to opportunity costs. It is inconsistent to highlight funding challenges [2,37] while simultaneously acting to inhibit maximum value for money being derived from funded studies. Several of the studies represented in the survey by Mills et al. [2] comfortably exceed a 50-year time span, highlighting the possibility that current PIs are inheritors rather than initiators of long-term studies. In such a situation, arguments favouring the rights of the PI to maintain control of postpublication access to primary data are weakened still further, given that the data may be the result of someone else's efforts. Indeed, given the undoubted value of long-term studies for ecological and evolutionary research [2,37,39], many of Mills et al.'s [2] survey respondents will presumably hope to see these studies continue after their own retirement. Rather than owners of datasets, then, perhaps PIs of long-term studies might better be considered as custodians, such that—to adapt the slogan of a Swiss watchmaker—“you never really own a long-term population study; you merely look after it for the next generation.”
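The survey procedure described above amounts to a filter over Dryad package metadata followed by a tally of reuse evidence from three checks (citations of the package itself, inspection of articles citing the associated report, and author replies). The sketch below is a minimal illustration of that logic only; the field names and the example records are hypothetical stand-ins, not the author's actual metadata or analysis code.

```python
# Illustrative sketch of the survey's selection and reuse-screening logic.
# Field names and example records are hypothetical stand-ins for Dryad metadata.

SURVEY_JOURNALS = {
    "The American Naturalist",
    "Evolution",
    "Journal of Evolutionary Biology",
    "Proceedings of the Royal Society B",
}
MIN_STUDY_SPAN_YEARS = 5          # shortest study in Mills et al.'s survey
LAST_ELIGIBLE_DRYAD_YEAR = 2013   # leaves time for embargoes and reuse articles to appear

def eligible(pkg):
    """Apply the survey's inclusion criteria to one package record."""
    return (
        pkg["journal"] in SURVEY_JOURNALS
        and pkg["dryad_year"] <= LAST_ELIGIBLE_DRYAD_YEAR
        and pkg["study_span_years"] is not None
        and pkg["study_span_years"] >= MIN_STUDY_SPAN_YEARS
        and not pkg["captive_population"]
    )

def summarise(packages):
    """Tally evidence of independent reuse across the three checks described above."""
    selected = [p for p in packages if eligible(p)]
    reused = [
        p for p in selected
        if p["package_citations_by_others"] > 0
        or p["reuse_found_in_citing_articles"]
        or p["reuse_reported_by_authors"]
    ]
    return len(selected), len(reused)

# Example with made-up records:
packages = [
    {"journal": "Evolution", "dryad_year": 2012, "study_span_years": 20,
     "captive_population": False, "package_citations_by_others": 0,
     "reuse_found_in_citing_articles": False, "reuse_reported_by_authors": False},
    {"journal": "Evolution", "dryad_year": 2015, "study_span_years": 8,
     "captive_population": False, "package_citations_by_others": 0,
     "reuse_found_in_citing_articles": False, "reuse_reported_by_authors": False},
]
n_selected, n_reused = summarise(packages)
print(f"{n_selected} packages met the criteria; {n_reused} showed independent reuse")
```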

4.
The hippocampus has unique access to neuronal activity across all of the neocortex. Yet an unanswered question is how the transfer of information between these structures is gated. One hypothesis involves temporal-locking of activity in the neocortex with that in the hippocampus. New data from the Matthew E. Diamond laboratory shows that the rhythmic neuronal activity that accompanies vibrissa-based sensation, in rats, transiently locks to ongoing hippocampal θ-rhythmic activity during the sensory-gathering epoch of a discrimination task. This result complements past studies on the locking of sniffing and the θ-rhythm as well as the relation of sniffing and whisking. An overarching possibility is that the preBötzinger inspiration oscillator, which paces whisking, can selectively lock with the θ-rhythm to traffic sensorimotor information between the rat’s neocortex and hippocampus.

The hippocampus lies along the margins of the cortical mantle and has unique access to neuronal activity across all of the neocortex. From a functional perspective, the hippocampus forms the apex of neuronal processing in mammals and is a key element in the short-term working memory, where neuronal signals persist for tens of seconds, that is independent of the frontal cortex (reviewed in [1,2]). Sensory information from multiple modalities is highly transformed as it passes from primary and higher-order sensory areas to the hippocampus. Several anatomically defined regions that lie within the temporal lobe take part in this transformation, all of which involve circuits with extensive recurrent feedback connections (reviewed in [3]) (Fig 1). This circuit motif is reminiscent of the pattern of connectivity within models of associative neuronal networks, whose dynamics lead to the clustering of neuronal inputs to form a reduced set of abstract representations [4] (reviewed in [5]). The first way station in the temporal lobe contains the postrhinal and perirhinal cortices, followed by the medial and lateral entorhinal cortices. Of note, olfactory input—which, unlike other senses, has no spatial component to its representation—has direct input to the lateral entorhinal cortex [6]. The third structure is the hippocampus, which contains multiple substructures (Fig 1).

Fig 1. Schematic view of the circuitry of the temporal lobe and its connections to other brain areas of relevance. Figure abstracted from published results [7–15]. Composite illustration by Julia Kuhl.

The specific nature of signal transformation and neuronal computations within the hippocampus is largely an open issue that defines the agenda of a great many laboratories. Equally vexing is the nature of signal transformation as the output leaves the hippocampus and propagates back to regions in the neocortex (Fig 1)—including the medial prefrontal cortex, a site of sensory integration and decision-making—in order to influence perception and motor action. The current experimental data suggest that only some signals within the sensory stream propagate into and out of the hippocampus. What regulates communication with the hippocampus or, more generally, with structures within the temporal lobe? The results from studies in rats and mice suggest that the most parsimonious hypothesis, at least for rodents, involves the rhythmic nature of neuronal activity at the so-called θ-rhythm [16], a 5–10 Hz oscillation (reviewed in [17]).
The origin of the rhythm is not readily localized to a single locus [10], but certainly involves input from the medial septum [17] (a member of the forebrain cholinergic system) as well as from the supramammillary nucleus [10,18] (a member of the hypothalamus). The medial septum projects broadly to targets in the hippocampus and entorhinal cortex (Fig 1) [10]. Many motor actions, such as the orofacial actions of sniffing, whisking, and licking, occur within the frequency range of the θ-rhythm [19,20]. Thus, sensory input that is modulated by rhythmic self-motion can, in principle, phase-lock with hippocampal activity at the θ-rhythm to ensure the coherent trafficking of information between the relevant neocortical regions and temporal lobe structures [2123].We now shift to the nature of orofacial sensory inputs, specifically whisking and sniffing, which are believed to dominate the world view of rodents [19]. Recent work identified a premotor nucleus in the ventral medulla, named the vibrissa region of the intermediate reticular zone, whose oscillatory output is necessary and sufficient to drive rhythmic whisking [24]. While whisking can occur independently of breathing, sniffing and whisking are synchronized in the curious and aroused animal [24,25], as the preBötzinger complex in the medulla [26]—the oscillator for inspiration—paces whisking at nominally 5–10 Hz through collateral projections [27]. Thus, for the purposes of reviewing evidence for the locking of orofacial sensory inputs to the hippocampal θ-rhythm, we confine our analysis to aroused animals that function with effectively a single sniff/whisk oscillator [28].What is the evidence for the locking of somatosensory signaling by the vibrissae to the hippocampal θ-rhythm? The first suggestion of phase locking between whisking and the θ-rhythm was based on a small sample size [29,30], which allowed for the possibility of spurious correlations. Phase locking was subsequently reexamined, using a relatively large dataset of 2 s whisking epochs, across many animals, as animals whisked in air [31]. The authors concluded that while whisking and the θ-rhythm share the same spectral band, their phases drift incoherently. Yet the possibility remained that phase locking could occur during special intervals, such as when a rat learns to discriminate an object with its vibrissae or when it performs a memory-based task. This set the stage for a further reexamination of this issue across different epochs in a rewarded task. Work from Diamond''s laboratory that is published in the current issue of PLOS Biology addresses just this point in a well-crafted experiment that involves rats trained to perform a discrimination task.Grion, Akrami, Zuo, Stella, and Diamond [32] trained rats to discriminate between two different textures with their vibrissae. The animals were rewarded if they turned to a water port on the side that was paired with a particular texture. Concurrent with this task, the investigators also recorded the local field potential in the hippocampus (from which they extracted the θ-rhythm), the position of the vibrissae (from which they extracted the evolution of phase in the whisk cycle), and the spiking of units in the vibrissa primary sensory cortex. Their first new finding is a substantial increase in the amplitude of the hippocampal field potential at the θ-rhythm frequency—approximately 10 Hz for the data of Fig 2A—during the two, approximately 0.5 s epochs when the animal approaches the textures and whisks against it. 
There is significant phase locking between whisking and the hippocampal θ-rhythm during both of these epochs (Fig 2B), as compared to a null hypothesis of whisking while the animal whisked in air outside the discrimination zone. Unfortunately, the coherence between whisking and the hippocampal θ-rhythm could not be ascertained during the decision, i.e., turn and reward epochs. Nonetheless, these data show that the coherence between whisking and the hippocampal θ-rhythm is closely aligned to epochs of active information gathering.

Fig 2. Summary of findings on the θ-rhythm in a rat during a texture discrimination task, derived from reference [32]. (A) Spectrogram showing the change in spectral power of the local field potential in the hippocampal area CA1 before, during, and after a whisking-based discrimination task. (B) Summary index of the increase in coherence between the band-limited hippocampal θ-rhythm and whisking signals during approach of the rat to the stimulus and subsequent touch. The index reports √(⟨sin(ϕH − ϕW)⟩² + ⟨cos(ϕH − ϕW)⟩²), where ϕH and ϕW are the instantaneous phase of the hippocampal and whisking signals, respectively, and averaging is over all trials and animals. (C) Summary indices of the increase in coherence between the band-limited hippocampal θ-rhythm and the spiking signal in the vibrissa primary sensory cortex (“barrel cortex”). The magnitude of the index for each neuron is plotted versus phase in the θ-rhythm. The arrows show the concentration of units around the mean phase—black arrows for the vector average across only neurons with significant phase locking (solid circles) and gray arrows for the vector average across all neurons (open and closed circles). The concurrent positions of the vibrissae are indicated. The vector average is statistically significant only for the approach (p < 0.0001) and touch (p = 0.04) epochs.

The second finding by Grion, Akrami, Zuo, Stella, and Diamond [32] addresses the relationship between spiking activity in the vibrissa primary sensory cortex and the hippocampal θ-rhythm. The authors find that spiking is essentially independent of the θ-rhythm outside of the task (foraging in Fig 2C), similar to the result for whisking and the θ-rhythm (Fig 2B). They observe strong coherence between spiking and the θ-rhythm during the 0.5 s epoch when the animal approaches the textures (approach in Fig 2C), yet reduced (but still significant) coherence during the touch epoch (touch in Fig 2C). The latter result is somewhat surprising, given past work from a number of laboratories that observe spiking in the primary sensory cortex and whisking to be weakly yet significantly phase-locked during exploratory whisking [33–37]. Perhaps overtraining leads to only a modest need for the transfer of sensory information to the hippocampus. Nonetheless, these data establish that phase locking of hippocampal and sensory cortical activity is essentially confined to the epoch of sensory gathering.

Given the recent finding of a one-to-one locking of whisking and sniffing [24], we expect to find direct evidence for the phase locking of sniffing and the θ-rhythm. Early work indeed reported such phase locking [38] but, as in the case of whisking [29], this may have been a consequence of too small a sample and, thus, inadequate statistical power. However, Macrides, Eichenbaum, and Forbes [39] reexamined the relationship between sniffing and the hippocampal θ-rhythm before, during, and after animals sampled an odorant in a forced-choice task.
They found evidence that the two rhythms phase-lock within approximately one second of the sampling epoch. We interpret this locking to be similar to that seen in the study by Diamond and colleagues (Fig 2B) [32]. All told, the combined data for sniffing and whisking by the aroused rodent, as accumulated across multiple laboratories, suggest that two oscillatory circuits—the supramammillary nucleus and medial septum complex that drives the hippocampal θ-rhythm and the preBötzinger complex that drives inspiration and paces the whisking oscillator during sniffing (Fig 1)—can phase-lock during epochs of gathering sensory information and likely sustain working memory.What anatomical pathway can lead to phase locking of these two oscillators? The electrophysiological study of Tsanov, Chah, Reilly, and O’Mara [9] supports a pathway from the medial septum, which is driven by the supramammillary nucleus, to dorsal pontine nuclei in the brainstem. The pontine nucleus projects to respiratory nuclei and, ultimately, the preBötzinger oscillator (Fig 1). This unidirectional pathway can, in principle, entrain breathing and whisking. Phase locking is not expected to occur during periods of basal breathing, when the breathing rate and θ-rhythm occur at highly incommensurate frequencies. However, it remains unclear why phase locking occurs only during a selected epoch of a discrimination task, whereas breathing and the θ-rhythm occupy the same frequency band during the epochs of approach, as well as touch-based target selection (Fig 2A). While a reafferent pathway provides the rat with information on self-motion of the vibrissae (Fig 1), it is currently unknown whether that information provides feedback for phase locking.A seeming requirement for effective communication between neocortical and hippocampal processing is that phase locking must be achieved at all possible phases of the θ-rhythm. Can multiple phase differences between sensory signals and the hippocampal θ-rhythm be accommodated? Two studies report that the θ-rhythm undergoes a systematic phase-shift along the dorsal–ventral axis of the hippocampus [40,41], although the full extent of this shift is only π radians [41]. In addition, past work shows that vibrissa input during whisking is represented among all phases of the sniff/whisk cycle, at levels from primary sensory neurons [42,43] through thalamus [44,45] and neocortex [3337], with a bias toward retraction from the protracted position. A similar spread in phase occurs for olfactory input, as observed at the levels of the olfactory bulb [46] and cortex [47]. Thus, in principle, the hippocampus can receive, transform, and output sensory signals that arise over all possible phases in the sniff/whisk cycle. In this regard, two signals that are exactly out-of-phase by π radians can phase-lock as readily as signals that are in-phase.What are the constraints for phase locking to occur within the observed texture identification epochs? For a linear system, the time to lock between an external input and hippocampal theta depends on the observed spread in the spectrum of the θ-rhythm. This is estimated as Δf ~3 Hz (half-width at half-maximum amplitude), implying a locking time on the order of 1/Δf ~0.3 s. 
This is consistent with the approximate one second of enhanced θ-rhythm activity observed in the study by Diamond and colleagues (Fig 2A) [32] and in prior work [39,48] during a forced-choice task with rodents.

Does the θ-rhythm also play a role in the gating of output from the hippocampus to areas of the neocortex? Siapas, Lubenov, and Wilson [48] provided evidence that hippocampal θ-rhythm phase-locks to electrical activity in the medial prefrontal cortex, a site of sensory integration as well as decision-making. Subsequent work [49–51] showed that the hippocampus drives the prefrontal cortex, consistent with the known unidirectional connectivity between Cornu Ammonis area 1 (CA1) of the hippocampus and the prefrontal cortex [11] (Fig 1). Further, phase locking of hippocampal and prefrontal cortical activity is largely confined to the epoch of decision-making, as opposed to the epoch of sensory gathering. Thus, over the course of approximately one second, sensory information flows into and then out of the hippocampus, gated by phase coherence between rhythmic neocortical and hippocampal neuronal activity.

It is of interest that the medial prefrontal cortex receives input signals from sensory areas in the neocortex [52] as well as a transformed version of these input signals via the hippocampus (Fig 1). Yet it remains to be determined if this constitutes a viable hub for the comparison of the original and transformed signals. In particular, projections to the medial prefrontal cortex arise from the ventral hippocampus [2], while studies on the phase locking of hippocampal θ-rhythm to prefrontal neocortical activity were conducted in dorsal hippocampus, where the θ-rhythm is strong compared to the ventral end [53]. Therefore, similar recordings need to be performed in the ventral hippocampus. An intriguing possibility is that the continuous phase-shift of the θ-rhythm along the dorsal to the ventral axis of the hippocampus [40,41] provides a means to encode the arrival of novel inputs from multiple sensory modalities relative to a common clock.

A final issue concerns the locking between sensory signals and hippocampal neuronal activity in species that do not exhibit a continuous θ-rhythm, with particular reference to bats [54–56] and primates [57–60]. One possibility is that only the up and down swings of neuronal activity about a mean are important, as opposed to the rhythm per se. In fact, for animals in which orofacial input plays a relatively minor role compared to rodents, such a scheme of clocked yet arrhythmic input may be a necessity. In this case, the window of processing is set by a stochastic interval between transitions, as opposed to the periodicity of the θ-rhythm. This may imply that up/down swings of neuronal activity may drive hippocampal–neocortical communications in all species, with communication mediated via phase-locked oscillators in rodents and via synchronous fluctuations in bats and primates. The validity of this scheme and its potential consequence on neuronal computation remains an open issue and a focus of ongoing research.
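The coherence index in Fig 2B is the length of the mean phase-difference vector between the two signals. The sketch below illustrates that calculation (assuming the square-root form of the index given in the caption) together with the 1/Δf locking-time estimate quoted above; the 10 Hz signals are synthetic stand-ins invented for demonstration, not the study's recordings.

```python
import numpy as np

def phase_locking_index(phi_h, phi_w):
    """Mean resultant length of the phase difference between the hippocampal
    theta phase (phi_h) and the whisking phase (phi_w), averaged over samples.
    Returns ~1 for tight locking and a small value for drifting phases."""
    d = np.asarray(phi_h) - np.asarray(phi_w)
    return np.sqrt(np.mean(np.sin(d)) ** 2 + np.mean(np.cos(d)) ** 2)

# Synthetic example: two ~10 Hz oscillators sampled at 1 kHz for 0.5 s.
rng = np.random.default_rng(0)
t = np.arange(0, 0.5, 0.001)
theta_phase = 2 * np.pi * 10 * t
locked_whisk = theta_phase + 0.8 + 0.2 * rng.standard_normal(t.size)  # fixed offset plus jitter
drifting_whisk = 2 * np.pi * 8.5 * t + rng.uniform(0, 2 * np.pi)      # incommensurate frequency

print(round(phase_locking_index(theta_phase, locked_whisk), 2))    # close to 1
print(round(phase_locking_index(theta_phase, drifting_whisk), 2))  # noticeably lower

# Locking-time estimate from the spectral half-width of the theta band:
delta_f = 3.0  # Hz, half-width at half-maximum amplitude quoted above
print(f"locking time ~ {1.0 / delta_f:.1f} s")
```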

5.
Coral reefs on remote islands and atolls are less exposed to direct human stressors but are becoming increasingly vulnerable because of their development for geopolitical and military purposes. Here we document dredging and filling activities by countries in the South China Sea, where building new islands and channels on atolls is leading to considerable losses of, and perhaps irreversible damages to, unique coral reef ecosystems. Preventing similar damage across other reefs in the region necessitates the urgent development of cooperative management of disputed territories in the South China Sea. We suggest using the Antarctic Treaty as a positive precedent for such international cooperation.Coral reefs constitute one of the most diverse, socioeconomically important, and threatened ecosystems in the world [13]. Coral reefs harbor thousands of species [4] and provide food and livelihoods for millions of people while safeguarding coastal populations from extreme weather disturbances [2,3]. Unfortunately, the world’s coral reefs are rapidly degrading [13], with ~19% of the total coral reef area effectively lost [3] and 60% to 75% under direct human pressures [3,5,6]. Climate change aside, this decline has been attributed to threats emerging from widespread human expansion in coastal areas, which has facilitated exploitation of local resources, assisted colonization by invasive species, and led to the loss and degradation of habitats directly and indirectly through fishing and runoff from agriculture and sewage systems [13,57]. In efforts to protect the world’s coral reefs, remote islands and atolls are often seen as reefs of “hope,” as their isolation and uninhabitability provide de facto protection against direct human stressors, and may help impacted reefs through replenishment [5,6]. Such isolated reefs may, however, still be vulnerable because of their geopolitical and military importance (e.g., allowing expansion of exclusive economic zones and providing strategic bases for military operations). Here we document patterns of reclamation (here defined as creating new land by filling submerged areas) of atolls in the South China Sea, which have resulted in considerable loss of coral reefs. We show that conditions are ripe for reclamation of more atolls, highlighting the need for international cooperation in the protection of these atolls before more unique and ecologically important biological assets are damaged, potentially irreversibly so.Studies of past reclamations and reef dredging activities have shown that these operations are highly deleterious to coral reefs [8,9]. First, reef dredging affects large parts of the surrounding reef, not just the dredged areas themselves. For example, 440 ha of reef was completely destroyed by dredging on Johnston Island (United States) in the 1960s, but over 2,800 ha of nearby reefs were also affected [10]. Similarly, at Hay Point (Australia) in 2006 there was a loss of coral cover up to 6 km away from dredging operations [11]. Second, recovery from the direct and indirect effects of dredging is slow at best and nonexistent at worst. In 1939, 29% of the reefs in Kaneohe Bay (United States) were removed by dredging, and none of the patch reefs that were dredged had completely recovered 30 years later [12]. 
In Castle Harbour (Bermuda), reclamation to build an airfield in the early 1940s led to limited coral recolonization and large quantities of resuspended sediments even 32 years after reclamation [13]; several fish species are claimed extinct as a result of this dredging [14,15]. Such examples and others led Hatcher et al. [8] to conclude that dredging and land clearing, as well as the associated sedimentation, are possibly the most permanent of anthropogenic impacts on coral reefs.

The impacts of dredging for the Spratly Islands are of particular concern because the geographical position of these atolls favors connectivity via stepping stones for reefs over the region [16–19] and because their high biodiversity works as insurance for many species. In an extensive review of the sparse and limited data available for the region, Hughes et al. [20] showed that reefs on offshore atolls in the South China Sea were overall in better condition than near-shore reefs. For instance, by 2004 they reported average coral covers of 64% for the Spratly Islands and 68% for the Paracel Islands. By comparison, coral reefs across the Indo-Pacific region in 2004 had average coral covers below 25% [21]. Reefs on isolated atolls can still be prone to extensive bleaching and mortality due to global climate change [22] and, in the particular case of atolls in the South China Sea, the use of explosives and cyanide [20]. However, the potential for recovery of isolated reefs to such stressors is remarkable. Hughes et al. [20] documented, for instance, how coral cover in several offshore reefs in the region declined from above 80% in the early 1990s to below 6% by 1998 to 2001 (due to a mixture of El Niño and damaging fishing methods that make use of cyanide and explosives) but then recovered to 30% on most reefs and up to 78% in some reefs by 2004–2008. Another important attribute of atolls in the South China Sea is the great diversity of species. Over 6,500 marine species are recorded for these atolls [23], including some 571 reef coral species [24] (more than half of the world’s known species of reef-building corals). The relatively better health and high diversity of coral reefs in atolls over the South China Sea highlights the uniqueness of such reefs and the important roles they may play for reefs throughout the entire region. Furthermore, these atolls are safe harbor for some of the last viable populations of highly threatened species (e.g., Bumphead Parrotfish [Bolbometopon muricatum] and several species of sawfishes [Pristis, Anoxypristis]), highlighting how dredging in the South China Sea may threaten not only species with extinction but also the commitment by countries in the region to biodiversity conservation goals such as the Convention on Biological Diversity Aichi Targets and the United Nations Sustainable Development Goals.

Recently available remote sensing data (i.e., Landsat 8 Operational Land Imager and Thermal Infrared Sensors Terrain Corrected images) allow quantification of the sharp contrast between the gain of land and the loss of coral reefs resulting from reclamation in the Spratly Islands (Fig 1).
For seven atolls recently reclaimed by China in the Spratly Islands (names provided in Fig 1D; see S1 Data for details), the area of reclamation is the size of visible areas in Landsat band 6, as prior to reclamation most of the atolls were submerged, with the exception of small areas occupied by a handful of buildings on piers (note that the amount of land area was near zero at the start of the reclamation; Fig 1C, S1 Data). The seven reclaimed atolls have effectively lost ~11.6 km² (26.9%) of their reef area for a gain of ~10.7 km² of land (i.e., >75 times increase in land area) from February 2014 to May 2015 (Fig 1C). The area of land gained was smaller than the area of reef lost because reefs were lost not only through land reclamation but also through the deepening of reef lagoons to allow boat access (Fig 1B). Similar quantification of reclamation by other countries in the South China Sea is provided in Fig 1 and Table 1.

Fig 1. Reclamation leads to gains of land in return for losses of coral reefs: A case example of China’s recent reclamation in the Spratly Islands.

Table 1. List of reclaimed atolls in the Spratly Islands and the Paracel Islands.

The impacts of reclamation on coral reefs are likely more severe than simple changes in area, as reclamation is being achieved by means of suction dredging (i.e., cutting and sucking materials from the seafloor and pumping them over land). With this method, reefs are ecologically degraded and denuded of their structural complexity. Dredging and pumping also disturbs the seafloor and can cause runoff from reclaimed land, which generates large clouds of suspended sediment [11] that can lead to coral mortality by overwhelming the corals’ capacity to remove sediments and leave corals susceptible to lesions and diseases [7,9,25]. The highly abrasive coralline sands in flowing water can scour away living tissue on a myriad of species and bury many organisms beyond their recovery limits [26]. Such sedimentation also prevents new coral larvae from settling in and around the dredged areas, which is one of the main reasons why dredged areas show no signs of recovery even decades after the initial dredging operations [9,12,13]. Furthermore, degradation of wave-breaking reef crests, which make reclamation in these areas feasible, will result in a further reduction of coral reefs’ ability to (1) self-repair and protect against wave abrasion [27,28] (especially in a region characterized by typhoons) and (2) keep up with rising sea levels over the next several decades [29]. This suggests that the new islands would require periodic dredging and filling, that these reefs may face chronic distress and long-term ecological damage, and that reclamation may prove economically expensive and impractical.

The potential for land reclamation on other atolls in the Spratly Islands is high, which necessitates the urgent development of cooperative management of disputed territories in the South China Sea. First, the Spratly Islands are rich in atolls with similar characteristics to those already reclaimed (Fig 1D); second, there are calls for rapid development of disputed territories to gain access to resources and increase sovereignty and military strength [30]; and third, all countries with claims in the Spratly Islands have performed reclamation in this archipelago [20]. One such possibility is the generation of a multinational marine protected area [16,17].
Such a marine protected area could safeguard an area of high biodiversity and importance to genetic connectivity in the Pacific, in addition to promoting peace in the region (extended justification provided by McManus [16,17]). A positive precedent for the creation of this protected area is that of Antarctica, which was also subject to numerous overlapping claims and where a recently renewed treaty froze national claims, preventing large-scale ecological damage while providing environmental protection and areas for scientific study. Development of such a legal framework for the management of the Spratly Islands could prevent conflict, promote functional ecosystems, and potentially result in larger gains (through spillover, e.g. [31]) for all countries involved.
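For orientation, the headline figures quoted for the seven reclaimed atolls follow from simple area bookkeeping on the satellite-derived estimates. The sketch below reuses only the aggregate numbers given in the text; per-atoll values would come from S1 Data, which is not reproduced here.

```python
# Aggregate figures quoted in the text for the seven atolls (Feb 2014 - May 2015).
reef_area_lost_km2 = 11.6    # reef converted to land or dredged for channels
land_gained_km2 = 10.7       # new land created by filling
reef_loss_fraction = 0.269   # 26.9% of the atolls' reef area

# Total reef area of the seven atolls implied by the loss fraction:
total_reef_km2 = reef_area_lost_km2 / reef_loss_fraction
print(f"implied pre-reclamation reef area: {total_reef_km2:.1f} km^2")

# Land gained is smaller than reef lost because lagoons were also deepened for boat access:
print(f"reef lost beyond land gained: {reef_area_lost_km2 - land_gained_km2:.1f} km^2")

# A >75-fold increase in land area implies the pre-existing land was tiny:
max_initial_land_km2 = land_gained_km2 / 75
print(f"initial land area was below ~{max_initial_land_km2:.2f} km^2")
```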

6.
The diversification of prokaryotes is accelerated by their ability to acquire DNA from other genomes. However, the underlying processes also facilitate genome infection by costly mobile genetic elements. The discovery that cells can uptake DNA by natural transformation was instrumental to the birth of molecular biology nearly a century ago. Surprisingly, a new study shows that this mechanism could efficiently cure the genome of mobile elements acquired through previous sexual exchanges.

Horizontal gene transfer (HGT) is a key contributor to the genetic diversification of prokaryotes [1]. Its frequency in natural populations is very high, leading to species’ gene repertoires with relatively few ubiquitous (core) genes and many low-frequency genes (present in a small proportion of individuals). The latter are responsible for much of the phenotypic diversity observed in prokaryotic species and are often encoded in mobile genetic elements that spread between individual genomes as costly molecular parasites. Hence, HGT of interesting traits is often carried by expensive vehicles.

The net fitness gain of horizontal gene transfer depends on the genetic background of the new host, the acquired traits, the fitness cost of the mobile element, and the ecological context [2]. A study published in this issue of PLOS Biology [3] proposes that a mechanism originally thought to favor the acquisition of novel DNA—natural transformation—might actually allow prokaryotes to clean their genome of mobile genetic elements.

Natural transformation allows the uptake of environmental DNA into the cell (Fig 1). It differs markedly from the other major mechanisms of HGT by depending exclusively on the recipient cell, which controls the expression of the transformation machinery and favors exchanges with closely related taxa [4]. DNA arrives at the cytoplasm in the form of small single-stranded fragments. If it is not degraded, it may integrate the genome by homologous recombination at regions of high sequence similarity (Fig 1). This results in allelic exchange between a fraction of the chromosome and the foreign DNA. Depending on the recombination mechanisms operating in the cell and on the extent of sequence similarity between the transforming DNA and the genome, alternative recombination processes may take place. Nonhomologous DNA flanked by regions of high similarity can be integrated by double homologous recombination at the edges (Fig 1E). Mechanisms mixing homologous and illegitimate recombination require less strict sequence similarity and may also integrate nonhomologous DNA in the genome [5]. Some of these processes lead to small deletions of chromosomal DNA [6]. These alternative recombination pathways allow the bacterium to lose and/or acquire novel genetic information.

Fig 1. Natural transformation and its outcomes. The mechanism of environmental DNA uptake brings into the cytoplasm small single-stranded DNA fragments (A). Earlier models for the raison d’être of natural transformation have focused on the role of DNA as a nutrient (B), as a breaker of genetic linkage (C), or as a substrate for DNA repair (D). The chromosomal curing model allows the removal of mobile elements by recombination between conserved sequences at their extremities (E). The model is strongly affected by the size of the incoming DNA fragments, since the probability of uptake of a mobile element rapidly decreases with the size of the element and of the incoming fragments (F).
This leads to a bias towards the deletion of mobile elements by recombination, especially the largest ones. In spite of this asymmetry, some mobile elements can integrate the genome via natural transformation, following homologous recombination between large regions of high sequence similarity (G) or homology-facilitated illegitimate recombination in short regions of sequence similarity (H).Natural transformation was the first described mechanism of HGT. Its discovery, in the first half of the 20th century, was instrumental in demonstrating that DNA is the support of genetic information. This mechanism is also regularly used to genetically engineer bacteria. Researchers have thus been tantalized by the lack of any sort of consensus regarding the raison d’être of natural transformation.Croucher, Fraser, and colleagues propose that the small size of recombining DNA fragments arising from transformation biases the outcome of recombination towards the deletion of chromosomal genetic material (Fig 1F). Incoming DNA carrying the core genes that flank a mobile element, but missing the element itself, can provide small DNA fragments that become templates to delete the element from the recipient genome (Fig 1E). The inverse scenario, incoming DNA carrying the core genes and a mobile element absent from the genome, is unlikely due to the mobile element being large and the recombining transformation fragments being small. Importantly, this mechanism most efficiently removes the loci at low frequency in the population because incoming DNA is more likely to lack such intervening sequences when these are rare. Invading mobile genetic elements are initially at low frequencies in populations and will be frequently deleted by this mechanism. Hence, recombination will be strongly biased towards the deletion or inactivation of large mobile elements such as phages, integrative conjugative elements, and pathogenicity islands. Simulations at a population scale show that transformation could even counteract the horizontal spread of mobile elements.An obvious limit of natural transformation is that it can''t cope with mobile genetic elements that rapidly take control of the cell, such as virulent phages, or remain extra-chromosomal, such as plasmids. Another limit of transformation is that it facilitates the acquisition of costly mobile genetic elements [7,8], especially if these are small. When these elements replicate in the genome, as is the case of transposable elements, they may become difficult to remove by subsequent events of transformation. Further work will be needed to quantify the costs associated with such infections.Low-frequency adaptive genes might be deleted through transformation in the way proposed for mobile genetic elements. However, adaptive genes rise rapidly to high frequency in populations, becoming too frequent to be affected by transformation. Interestingly, genetic control of transformation might favor the removal of mobile elements incurring fitness costs while preserving those carrying adaptive traits [3]. Transformation could, thus, effectively cure chromosomes and other replicons of deleterious mobile genetic elements integrated in previous events of horizontal gene transfer while preserving recently acquired genes of adaptive value.Prokaryotes encode an arsenal of immune systems to prevent infection by mobile elements and several regulatory systems to repress their expression [9]. 
Under the new model (henceforth named the chromosomal curing model), transformation has a key, novel position in this arsenal because it allows the expression of the incoming DNA while subsequently removing deleterious elements from the genome.Mobile elements encode their own tools to evade the host immune systems [9]. Accordingly, they search to affect natural transformation [3]. Some mobile genetic elements integrate at, and thus inactivate, genes encoding the machineries required for DNA uptake or recombination. Other elements express nucleases that degrade exogenous DNA (precluding its uptake). These observations suggest an arms race evolutionary dynamics between the host, which uses natural transformation to cure its genome, and mobile genetic elements, which target these functions for their own protection. This gives further credibility to the hypothesis that transformation is a key player in the intra-genomic conflicts between prokaryotes and their mobile elements.Previous studies have proposed alternative explanations for the evolution of natural transformation, including the possibility that it was caused by selection for allelic recombination and horizontal gene transfer [10], for nutrient acquisition [11], or for DNA repair [12]. The latter hypothesis has recently enjoyed regained interest following observations that DNA-damage agents induce transformation [13,14], along with intriguing suggestions that competence might be advantageous even in the absence of DNA uptake [15,16]. The hypothesis that transformation evolved to acquire nutrients has received less support in recent years.Two key specific traits of transformation—host genetic control of the process and selection for conspecific DNA—share some resemblance with recombination processes occurring during sexual reproduction. Yet, the analogy between the two processes must be handled with care because transformation results, at best, in gene conversion of relatively small DNA fragments from another individual. The effect of sexual reproduction on genetic linkage is thought to be advantageous in the presence of genetic drift or weak and negative or fluctuating epistasis [17]. Interestingly, these conditions could frequently be met by bacterial pathogens [18], which might explain why there are so many naturally transformable bacteria among human pathogens, such as Streptococcus pneumoniae, Helicobacter pylori, Staphylococcus aureus, Haemophilus influenzae, or Neisseria spp. The most frequent criticism to the analogy between transformation and sexual reproduction is that environmental DNA from dead individuals is unlikely to carry better alleles than the living recipient [11]. This difficulty is circumvented in bacteria that actively export copies of their DNA to the extracellular environment. Furthermore, recent theoretical studies showed that competence could be adaptive even when the DNA originates from individuals with lower fitness alleles [19,20]. Mathematically speaking, sexual exchanges with the dead might be better than no exchanges at all.The evaluation of the relative merits of the different models aiming to explain the raison d’être of natural transformation is complicated because they share several predictions. 
For example, the induction of competence under maladapted environments can be explained by the need for DNA repair (more DNA damage in these conditions), by selection for adaptation (through recombination or HGT), and by the chromosomal curing model because mobile elements are more active under such conditions (leading to more intense selection for their inactivation). Some of the predictions of the latter model—the rapid diversification and loss of mobile elements and their targeting of the competence machinery—can also be explained by models involving competition between mobile elements and their antagonistic association with the host. One of the great uses of mathematical models in biology resides in their ability to pinpoint the range of parameters and conditions within which each model can apply. The chromosomal curing model remains valid under broad ranges of variation of many of its key variables. This might not be the case for alternative models [3].While further theoretical work will certainly help to specify the distinctive predictions of each model, realistic experimental evolutionary studies will be required to test them. Unfortunately, the few pioneering studies on this topic have given somewhat contradictory conclusions. Some showed that natural transformation was beneficial to bacteria adapting under suboptimal environments (e.g., in times of starvation or in stressful environments) [21,22], whereas others showed it was most beneficial under exponential growth and early stationary phase [23]. Finally, at least one study showed a negative effect of transformation on adaptation [24]. Part of these discrepancies might reveal differences between species, which express transformation under different conditions. They might also result from the low intraspecies genetic diversity in these experiments, in which case the use of more representative communities might clarify the conditions favoring transformation.Macroevolutionary studies on natural transformation are hindered by the small number of prokaryotes known to be naturally transformable (82 species, following [25]). In itself, this poses a challenge: if transformation is adaptive, then why does it seem to be so rare? The benefits associated with deletion of mobile elements, with functional innovation, or with DNA repair seem sufficiently general to affect many bacterial species. The trade-offs between cost and benefit of transformation might lead to its selection only when mobile elements are particularly deleterious for a given species or when species face particular adaptive challenges. According to the chromosomal curing model, selection for transformation would be stronger in highly structured environments or when recombination fragments are small. There is also some evidence that we have failed to identify numerous naturally transformable prokaryotes, in which case the question above may lose part of its relevance. Many genomes encode key components of the transformation machinery, suggesting that this process might be more widespread than currently acknowledged [25]. As an illustration, the ultimate model for research in microbiology—Escherichia coli—has only recently been shown to be naturally transformable; the conditions leading to the expression of this trait remain unknown [26].The chromosomal curing model might contribute to explaining other mechanisms shaping the evolution of prokaryotic genomes beyond the removal of mobile elements. 
Transformation-mediated deletion of genetic material, especially by homology-facilitated illegitimate recombination (Fig 1H), could remove genes involved in the mobility of genetic elements, facilitating the host’s co-option of functions encoded by mobile genetic elements. Several recent studies have highlighted the importance of such domestication processes in functional innovation and bacterial warfare [27]. The model might also be applicable to other mechanisms that transfer small DNA fragments between cells. These processes include gene transfer agents [28], extracellular vesicles [29], and possibly nanotubes [30]. The chromosomal curing model might help unravel their ecological and evolutionary impact.
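To illustrate the kind of parameter exploration such models allow, the toy simulation below is a purely hypothetical sketch, not the model analyzed in this Primer: a population carries a costly mobile element, and transformable cells that recombine short environmental DNA fragments delete the element far more often than they re-acquire it (the asymmetry at the heart of the curing argument). All parameter values are invented.

import random

# Toy sketch only; all parameters are hypothetical and do not come from the article.
POP_SIZE = 5000
ELEMENT_COST = 0.03      # relative fitness cost of carrying the mobile element
TRANSFORM_RATE = 0.02    # per-generation probability that a cell recombines eDNA
INSERTION_PENALTY = 0.1  # re-acquiring the element is assumed 10x rarer than deleting it
GENERATIONS = 300

def next_generation(population):
    # Selection: carriers reproduce with relative fitness 1 - ELEMENT_COST.
    weights = [1.0 - ELEMENT_COST if carrier else 1.0 for carrier in population]
    offspring = random.choices(population, weights=weights, k=POP_SIZE)
    p_element = sum(offspring) / POP_SIZE  # element frequency in the environmental DNA pool
    result = []
    for carrier in offspring:
        if random.random() < TRANSFORM_RATE:
            if carrier and random.random() < (1.0 - p_element):
                carrier = False   # recombination with element-free eDNA deletes the element
            elif not carrier and random.random() < p_element * INSERTION_PENALTY:
                carrier = True    # rare re-acquisition of the element by transformation
        result.append(carrier)
    return result

population = [random.random() < 0.9 for _ in range(POP_SIZE)]  # element starts near fixation
for _ in range(GENERATIONS):
    population = next_generation(population)
print(f"element frequency after {GENERATIONS} generations: {sum(population) / POP_SIZE:.3f}")

Varying ELEMENT_COST, TRANSFORM_RATE, and INSERTION_PENALTY in such a sketch is the kind of exercise that delimits where a curing-type benefit can, in principle, operate.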

7.
In the last 15 years, antiretroviral therapy (ART) has been the most globally impactful life-saving development of medical research. Antiretrovirals (ARVs) are used with great success for both the treatment and prevention of HIV infection. Despite these remarkable advances, this epidemic grows relentlessly worldwide. Over 2.1 million new infections occur each year, two-thirds in women and 240,000 in children. The widespread elimination of HIV will require the development of new, more potent prevention tools. Such efforts are imperative on a global scale. However, it must also be recognised that true containment of the epidemic requires the development and widespread implementation of a scientific advancement that has eluded us to date—a highly effective vaccine. Striving for such medical advances is what is required to achieve the end of AIDS.
In the last 15 years, antiretroviral therapy (ART) has been the most globally impactful life-saving development of medical research. Antiretrovirals (ARVs) are used with great success for both the treatment and prevention of HIV infection. In the United States, the widespread implementation of combination ARVs led to the virtual eradication of mother-to-child transmission of HIV from 1,650 cases in 1991 to 110 cases in 2011, and a turnaround in AIDS deaths from an almost 100% five-year mortality rate to a five-year survival rate of 91% in HIV-infected adults [1]. Currently, the estimated average lifespan of an HIV-infected adult in the developed world is well over 40 years post-diagnosis. Survival rates in the developing world, although lower, are improving: in sub-Saharan Africa, AIDS deaths fell by 39% between 2005 and 2013, and the biggest decline, 51%, was seen in South Africa [2].
Furthermore, the association between ART, viremia, and transmission has led to the concept of “test and treat,” with the hope of reducing community viral load by testing early and initiating treatment as soon as a diagnosis of HIV is made [3]. Indeed, selected regions of the world have begun to actualize the public health value of ARVs, from gains in life expectancy to impact on onward transmission, with a potential 1% decline in new infections for every 10% increase in treatment coverage [2]. In September 2015, WHO released new guidelines removing all limitations on eligibility for ART among people living with HIV and recommending pre-exposure prophylaxis (PrEP) to population groups at significant HIV risk, paving the way for a global onslaught on HIV [4].
Despite these remarkable advances, this epidemic grows relentlessly worldwide. Over 2.1 million new infections occur each year, two-thirds in women and 240,000 in children [2]. In heavily affected countries, HIV infection rates have only stabilized at best: the annualized acquisition rates in persons in their first decade of sexual activity average 3%–5% yearly in southern Africa [5–7]. These figures are hardly compatible with the international health community’s stated goal of an “AIDS-free generation” [8,9]. In highly resourced settings, microepidemics of HIV still occur, particularly among gays, bisexuals, and men who have sex with men (MSM) [10]. HIV epidemics are expanding in two geographic regions in 2015—the Middle East/North Africa and Eastern Europe/Central Asia—largely due to challenges in implementing evidence-based HIV policies and programmes [2].
Even in the US, almost 50,000 new cases have been recorded annually for the past decade, two-thirds among MSM; this figure has remained stable for years and shows no evidence of declining [1].
While treatment scale-up, medical male circumcision [11], and the implementation of strategies to prevent mother-to-child transmission [12] have received global traction, systemic or topical ARV-based biomedical advances to prevent sexual acquisition of HIV have, as yet, made limited impressions on a population basis, despite their reported efficacy. Factors such as their adherence requirements, cost, potential for drug resistance, and long-term feasibility have restricted the appetite for implementation, even though these approaches may reduce HIV incidence in select populations.
Already, several trials have shown that daily oral administration of the ARV tenofovir disoproxil fumarate (TDF), taken singly or in combination with emtricitabine, as PrEP by HIV-uninfected individuals, reduces HIV acquisition among serodiscordant couples (where one partner is HIV-positive and the other is HIV-negative) [13], MSM [14], at-risk men and women [15], and people who inject drugs [16,17] by between 44% and 75%. Long-acting injectable antiretroviral agents such as rilpivirine and cabotegravir, administered every two and three months, respectively, are also being developed for PrEP. All of these PrEP approaches are dependent on repeated HIV testing and adherence to drug regimens, which may challenge effectiveness in some populations and contexts.
The widespread elimination of HIV will require the development of new, more potent prevention tools. Because HIV acquisition occurs subclinically, the elimination of HIV on a population basis will require a highly effective vaccine. Alternatively, if vaccine development is delayed, supplementary strategies may include long-acting pre-exposure antiretroviral cocktails and/or the administration of neutralizing antibodies through long-lasting parenteral preparations or the development of a “genetic immunization” delivery system, as well as scaling up delivery of highly effective regimens to eliminate mother-to-child HIV transmission (Fig 1).
Fig 1. Medical interventions required to end the epidemic of HIV. Image credit: Glenda Gray.

8.
Life on earth is enormously diverse, in part because each individual engages in countless interactions with its biotic and abiotic environment during its lifetime. Not only are there many such interactions, but any given interaction of each individual with, say, its neighbor or a nutrient could lead to a different effect on its fitness and on the dynamics of the population of which it is a member. Predicting those effects is an enduring challenge to the field of ecology. Using a simple laboratory system, Hoek and colleagues present evidence that resource availability can be a primary driver of variation between interactions. Their results suggest that a complex continuum of interaction outcomes can result from the simple combined effects of nutrient availability and density-dependent population dynamics. The future is rich with potential to integrate tractable experimental systems like theirs with hypotheses derived from studies of interactions in natural communities.
The science of ecology is plagued or elevated (depending on your perspective) by the tendency for interactions between organisms and their environment to vary in space and time, with differing consequences for behavior, physiology, and/or fitness. This variation, known as context dependence, affects biotic and abiotic interactions alike and frustrates predictive efforts. For example, to determine how herbivory affects a plant population, you have to predict not only (a) how tissue loss will affect plant fitness but also (b) how much tissue will be lost (which depends on the density and identity of herbivores) and (c) how difficult that tissue will be to replace (which depends on resource availability). Herbivore communities and resource availability thus provide the “context” for plant tissue loss, such that herbivory can matter "hardly at all" or "a whole lot" depending on where you are and when you look.
Mutualisms—i.e., interactions with reciprocal fitness benefits (Box 1)—were early poster children for context dependence [1], in part because it seemed difficult to reconcile cooperative behavior with the selective pressure to minimize interaction costs [2]. Such selection can destabilize mutualism by favoring the evolution of exploiters (or "cheaters"), whose effects on their interacting partners are dampened or even reversed (i.e., resulting in parasitism; Box 1). Compounding this paradox still further, some mutualisms occur within a trophic level, where substantial niche overlap between partners also renders them potential competitors. Recent work on microbes nevertheless suggests that mutualism readily evolves between partners at the same trophic level under certain environmental conditions [3]. Work on positive interactions between plants had in fact previously suggested a general "stress gradient hypothesis" for predicting these context-dependent outcomes: interactions should transition from negative to positive along gradients of increasing environmental stress [4]. Although derived from plant community ecology, the stress gradient hypothesis has recently gained traction in diverse mutualisms [5–8]. So far, this concordance is more about pattern than process; elucidating process is, however, ultimately essential to understanding context dependence.

Box 1. The Interaction Compass

Interactions are usually defined by the direction in which they affect the interactors, be they species, strains, or individuals. Even as variation in interspecific interactions first came into focus [1,9], it was clear that both the strength and the sign of interactions shifted back and forth along a continuum (Fig 1). The center of the interaction compass (see [10,11]) is sometimes called neutralism, used here for any interaction in which no fitness effect occurs. Although the interaction compass is typically shown with only two species for the purposes of illustration, all species are involved in networks of interactions, and indirect interactions—defined where one species affects another by way of a third species or pathway—are ubiquitous in ecological communities and can rival direct interactions in their strength [e.g., 12]. The variety of terms and their distinct historical origins can lead to some ambiguity, as is the case with mutualism and facilitation [13]. Facilitation does not appear in the interaction compass (Fig 1), but it is associated with some of the earliest research on positive interactions across environmental gradients and with the stress gradient hypothesis in particular [4]. The term arises from 20th-century plant community ecology and refers either to any interaction where one species modifies the environment in a way that is positive for a neighboring species or specifically to positive interactions within a trophic level. Relevant here, until the recent surge of interest in microbe-microbe interactions, the term mutualism typically referred to interactions between trophic levels, where the competition outcome (––) is unlikely because the interactors do not overlap substantially in their niche requirements. It is common to speak instead of the mutualism-parasitism continuum. Although microbes fit perhaps only uncomfortably into the trophic boxes defined on the basis of macroorganism interactions, most cross-feeding mutualisms occur within a trophic level and thus could be thought of as examples of both mutualism and facilitation, with outcomes ranging around the full compass, from mutualism to competition and back to mutualism again.
Fig 1. The interaction compass. A two-species interaction is illustrated with the terms defining each of the differently signed outcomes; the signs indicate individual fitness or population growth rate. A positive (+) sign thus indicates a positive effect of the interaction on the individual or population, a zero (0) sign indicates no effect, and a negative (–) sign indicates a negative effect. Moving away from the center increases the magnitude of the net effect of the interaction.
One relatively straightforward path by which an increase in stress can lead to stronger mutualism is when the interaction involves a direct exchange of the environmentally limiting resource. In North American grasslands, grasses associate with arbuscular mycorrhizal fungi, which exchange soil nutrients for carbon fixed by the grass. The fungi can deliver both phosphorus and nitrogen, increasing grass uptake of whichever nutrient is least available in a given soil [5]. Although this seems like a good trick, we cannot characterize the outcome of the grass’s interaction with the fungi on the basis of nutrient uptake (the benefit) alone. The delivery of carbon by the grass to the fungi (the cost), and the net balance of trade (benefit−cost), is key.
In this example, the grass receives a net benefit (increased biomass) from interacting with the fungi in phosphorus-poor soil but not in nitrogen-poor soil [5]. The fungi thus seem to be parasites in nitrogen-poor soil, but, interestingly, even that interaction is less negative for grasses in soils with less nitrogen [5]. This example suggests potentially broad relevance for the stress gradient hypothesis across the continuum of interaction types (Box 1) (Fig 1) but also highlights how the balance of trade determines ecological outcomes. To predict the outcome of any given interaction, therefore, we need to understand how both the benefits and the costs of interactions depend on an organism’s environment. This is a challenging task, requiring integrative understanding of organismal physiology, axes of environmental variation, and the nature of biotic interactions, as well as of the feedbacks between organismal ecology and evolution.
The diversity and experimental tractability of microorganisms, as well as their fundamental role in life on earth, make them appealing systems for studying context dependence in its multiple dimensions. The potential for mutualism among microbes and between microbes and their multicellular hosts is receiving unprecedented attention as the diverse and important roles of the human gut microbiome come into sharp relief. As field-based studies of macroorganism interactions move past the recognition of context dependence to a deliberate focus on its drivers and mechanisms [14], laboratory-based studies of microbes are, in parallel, moving past debates about the "typical" nature of microbial interactions [15,16] to focus on how and why interaction outcomes vary across environmental gradients [3,17].
In this issue of PLOS Biology, Hoek and colleagues show that interactions between two cross-feeding yeast strains can transition across nearly a full continuum of outcomes with simple variation in environmental nutrient concentration [18]. Cross-feeding microbes are those with similar metabolic requirements whose metabolic pathways are complementary, either because of a "leaky" byproduct system whereby some metabolites end up in the environment [15] or because of costly, cooperative exchange [19]. In the Hoek et al. study, the investigators used strains of cross-feeding yeast engineered to differ in amino acid production: one strain lacks leucine production but overproduces tryptophan (Leu), and the other lacks tryptophan production but overproduces leucine (Trp). By varying the quantity of leucine and tryptophan in the environment in a constant ratio, the investigators produced a continuum of interaction outcomes, from low-amino-acid environments that exhibit obligate mutualism to high-amino-acid environments that exhibit strong competition. They go on to show that many of these dynamics can be recovered with a remarkably simple model of each strain’s population growth, primarily depending only on the quantity of environmental amino acids and the population densities of the two strains. The complete range of empirically determined qualitative change in the interaction is mirrored by this simple model, which suggests that the outcomes of interactions that depend on resource exchange (including most mutualisms!
[20]) can be predicted to an impressive degree by measuring the availability of that resource, the population densities of the interacting species, and their intrinsic growth rates.
Returning to our grass-fungi interaction from above, however, we recall that the benefits of interacting (receiving the missing amino acid in the case of these yeasts) are only one side of the coin. The costs of interacting are what underlie the conflicts of interest that threaten mutualism stability and can lead to increasingly negative interactions over evolutionary time. In the Hoek et al. study, the costs of overproducing the amino acid that is consumed by the other strain are modeled only implicitly in the intrinsic growth rate (r), which is determined by growing each strain in monoculture with unlimited amino acids. The major discrepancy between their model and the empirical data, however, is that the model incorrectly predicts a much larger range of amino-acid concentrations at which the Trp strain is expected to outperform the Leu strain in both monoculture and co-culture. Interestingly, the Trp strain performs particularly poorly when it is co-cultured with the Leu strain, except at very high levels of environmental amino-acid availability. It is tempting to speculate that this discrepancy is caused by the model’s lack of an explicit density-dependent cost of leucine production, which would be exacerbated when the Leu strain is performing well. Going forward, it should be possible to merge population dynamic models that include such a cost [21] with an explicit term for resource availability to see how well these models predict dynamics in a variety of empirical systems.
Rapid, ongoing global change presents one compelling reason to determine how resource availability underpins interactions, and the Hoek et al. study also sheds some needed light here. Using their model, they determine distinct early-warning "signatures" of imminent population collapse for co-cultures at very low levels of amino acids (think, e.g., drought), in which both strains go extinct upon the collapse of the obligate mutualism, and at very high levels of amino acids (think, e.g., nutrient pollution), in which competitive exclusion leads to the extinction of the slower growing strain. The collapse of populations engaging in obligate mutualism is predicted when the ratio of the population densities of the two strains becomes stable much more quickly than the total population size (particularly at small population sizes) and vice versa for competitive exclusion. In contrast, in healthy populations, the ratio of the strains and the total population size become stable at approximately equal speeds. As the authors note, this result suggests that we can predict how close one or both interacting species are to extinction by monitoring their comparative population dynamics.
This is an exciting prospect, but how easy is it to monitor the population dynamics of interacting species outside the laboratory? Monitoring populations of long-lived species in the field is inherently difficult, and, historically, less attention has been paid to determining how the environment affects populations of interacting species than to how it affects individual traits and fitness [14]. But wait, you say, surely individual fitness is the driving force behind population dynamics. Well, yes and no. To assess individual fitness, investigators almost always use one or more proxies, including growth, survival, and reproductive biomass.
Population-level studies have shown that a given interacting species can have multiple, frequently opposing effects on these different components of their partner’s fitness, such that an exclusive focus on any one component can be very misleading [22,23]. In addition, population dynamics depend on the probability of successful offspring recruitment; this probability is critical because it is also likely to vary along environmental gradients [14]. Population-level approaches thus deserve explicit focus despite their challenges, and this is one area where field studies can be greatly enhanced by both theory and model systems.
A more serious problem, perhaps, and one that confronts all manner of approaches and systems, is how to scale up from simplified studies of two interacting species to entire ecological communities [24,25]. To use distinct dynamical signatures to predict population collapse, we need to know what other threats a species faces as well as what other opportunities are available. For example, corals exchange nutrients and protection for fixed carbon from their photosynthetic algal symbionts in the genus Symbiodinium. There are at least four major clades within Symbiodinium that associate with corals, and any given coral can associate with more than one type of algae, either simultaneously or throughout its lifetime. Temperature is an important environmental stress for corals, leading to the well-known and increasingly problematic coral bleaching phenomenon, in which corals lose the carbon source provided by their algal symbionts (through loss of the symbionts themselves or of the symbionts’ photosynthetic capacity) as the oceans warm. However, some algal symbionts are more thermally tolerant than others, such that corals under temperature stress may lose their association with one symbiont (i.e., collapse of that obligate mutualism) only to gain association with a second, more thermally tolerant symbiont (i.e., formation of a different obligate mutualism) [26]. Various alternative responses to environmental stress can be imagined by considering interactions in a community context (Fig 2), and our understanding of these scenarios in well-studied field systems should be mined to generate hypotheses about lesser-known microbial interaction networks.
Fig 2. Mutualism in a community context. Multispecies interactions can exhibit a greater variety of outcomes than two-way interactions. (A) An interaction in which the mutualism is always obligate (individuals with no mutualist have zero fitness), but some mutualists are better than others at high environmental stress (e.g., as in the coral-algae example). (B) An interaction in which there are multiple mutualists that vary in their cost to the partner. The costly mutualist is more effective, so Mutualist A increases fitness more than Mutualist B when stress is low to intermediate, but the cost of Mutualist A exceeds its benefit when stress is intermediate to high. In this mutualism, unlike in (A), the partner can exist independently of its mutualists and actually does so with higher fitness when environmental stress is very low (resource availability is very high) or stress is very high (resource availability is very low).
The challenges to elucidating the drivers and mechanisms of context dependence are real, but the work of Hoek and colleagues reminds us that complex outcomes do not necessarily require complicated explanations.
The emerging parallels between the burgeoning study of interactions among microbes and the research on species interactions in the macro-world should not be ignored and should, in fact, be leveraged for additional insight. For example, a functional approach is being pursued in both subfields and may increase our ability to generalize across highly diverse systems [24,27], but the context dependence of such functional types themselves [28] must be recognized and investigated in tandem. Simple systems like these cross-feeding yeasts suggest numerous possible future experiments to study what happens when we add dimensions in the environment or the species pool under high levels of experimental control. This study emphasizes the importance of resource availability for orienting the interaction compass.
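To make the resource-dependence argument concrete, the sketch below is a minimal, hypothetical model in the spirit of the one discussed above (it is not Hoek and colleagues’ fitted model): two cross-feeding strains grow on an amino acid supplied externally and leaked by the partner, while sharing a common carrying capacity. Comparing each strain’s final density with and without its partner shows how the sign of the interaction can flip along the supply gradient. Every parameter value is invented for illustration.

def final_density(supply, with_partner, days=200.0, dt=0.01):
    """Return strain 1's final density after `days` of growth (simple Euler integration)."""
    r_max, K, leak, half_sat = 0.5, 1.0, 0.3, 0.05   # all values hypothetical
    n1 = 0.01
    n2 = 0.01 if with_partner else 0.0
    for _ in range(int(days / dt)):
        a1 = supply + leak * n2                # amino acid strain 1 cannot synthesize
        a2 = supply + leak * n1                # amino acid strain 2 cannot synthesize
        crowding = 1.0 - (n1 + n2) / K         # shared carrying capacity
        dn1 = r_max * a1 / (half_sat + a1) * crowding * n1
        dn2 = r_max * a2 / (half_sat + a2) * crowding * n2
        n1, n2 = n1 + dt * dn1, n2 + dt * dn2
    return n1

# The sign of the partner's effect on strain 1 flips along the supply gradient;
# where exactly it flips depends entirely on these arbitrary parameter choices.
for supply in (0.0, 0.05, 1.0):
    alone = final_density(supply, with_partner=False)
    together = final_density(supply, with_partner=True)
    effect = "+" if together > alone else "-"
    print(f"supply={supply:<4} alone={alone:.3f} together={together:.3f} effect on strain 1: {effect}")

At zero supply the strain cannot grow alone, so the partner's effect is positive (obligate mutualism); at saturating supply the two strains mainly compete for the shared carrying capacity.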

9.
10.
This Formal Comment provides clarifications on the authors’ recent estimates of global bacterial diversity and the current status of the field, and responds to a Formal Comment from John Wiens regarding their prior work.

We welcome Wiens’ efforts to estimate global animal-associated bacterial richness and thank him for highlighting points of confusion and potential caveats in our previous work on the topic [1]. We find Wiens’ ideas worthy of consideration, as most of them represent a step in the right direction, and we encourage lively scientific discourse for the advancement of knowledge. Time will ultimately reveal which estimates, and underlying assumptions, came closest to the true bacterial richness; we are excited and confident that this will happen in the near future thanks to rapidly increasing sequencing capabilities. Here, we provide some clarifications on our work, its relation to Wiens’ estimates, and the current status of the field.
First, Wiens states that we excluded animal-associated bacterial species in our global estimates. However, thousands of animal-associated samples were included in our analysis, and this was clearly stated in our main text (second paragraph on page 3).
Second, Wiens’ commentary focuses on “S1 Text” of our paper [1], which was rather peripheral, and, hence, in the Supporting information. S1 Text [1] critically evaluated the rationale underlying previous estimates of global bacterial operational taxonomic unit (OTU) richness by Larsen and colleagues [2], but the results of S1 Text [1] did not in any way flow into the analyses presented in our main article. Indeed, our estimates of global bacterial (and archaeal) richness, discussed in our main article, are based on 7 alternative well-established estimation methods founded on concrete statistical models, each developed specifically for richness estimates from multiple survey data. We applied these methods to >34,000 samples from >490 studies, including, but not restricted to, animal microbiomes, to arrive at our global estimates, independently of the discussion in S1 Text [1].
Third, Wiens’ commentary can yield the impression that we proposed that there are only 40,100 animal-associated bacterial OTUs and that Cephalotes in particular only have 40 associated bacterial OTUs. However, these numbers, mentioned in our S1 Text [1], were not meant to be taken as proposed point estimates for animal-associated OTU richness, and we believe that this was clear from our text. Instead, these numbers were meant as examples to demonstrate how strongly the estimates of animal-associated bacterial richness by Larsen and colleagues [2] would decrease simply by (a) using better justified mathematical formulas, i.e., with the same input data as used by Larsen and colleagues [2] but founded on an actual statistical model; (b) accounting for even minor overlaps in the OTUs associated with different animal genera; and/or (c) using alternative animal diversity estimates published by others [3], rather than those proposed by Larsen and colleagues [2]. Specifically, regarding (b), Larsen and colleagues [2] (pages 233 and 259) performed pairwise host species comparisons within various insect genera (for example, within the Cephalotes) to estimate on average how many bacterial OTUs were unique to each host species, then multiplied that estimate with their estimated number of animal species to determine the global animal-associated bacterial richness. However, since their pairwise host species comparisons were restricted to congeneric species, their estimated number of unique OTUs per host species does not account for potential overlaps between different host genera.
Indeed, even if an OTU is only found “in one” Cephalotes species, it might not be truly unique to that host species if it is also present in members of other host genera. To clarify, we did not claim that all animal genera can share bacterial OTUs, but instead considered the implications of some average microbiome overlap (some animal genera might share no bacteria, and other genera might share a lot). The average microbiome overlap of 0.1% (when clustering bacterial 16S sequences into OTUs at 97% similarity) between animal genera used in our illustrative example in S1 Text [1] is of course speculative, but it is not unreasonable (see our next point). A zero overlap (implicitly assumed by Larsen and colleagues [2]) is almost certainly wrong. One goal of our S1 Text [1] was to point out the dramatic effects of such overlaps on animal-associated bacterial richness estimates using “basic” mathematical arguments.
Fourth, Wiens’ commentary could yield the impression that existing data are able to tell us with sufficient certainty when a bacterial OTU is “unique” to a specific animal taxon. However, so far, the microbiomes of only a minuscule fraction of animal species have been surveyed. One can thus certainly not exclude the possibility that many bacterial OTUs currently thought to be “unique” to a certain animal taxon are eventually also found in other (potentially distantly related) animal taxa, for example, due to similar host diets and/or environmental conditions [4–7]. As a case in point, many bacteria in herbivorous fish guts were found to be closely related to bacteria in mammals [8], and Song and colleagues [6] report that bat microbiomes closely resemble those of birds. The gut microbiome of caterpillars consists mostly of dietary and environmental bacteria and is not species specific [4]. Even in animal taxa with characteristic microbiota, there is a documented overlap across host species and genera. For example, there are a small number of bacteria consistently and specifically associated with bees, but these are found across bee genera at the level of the 99.5% similar 16S rRNA OTUs [5]. To further illustrate that an average microbiome overlap between animal taxa at least as large as the one considered in our S1 Text (0.1%) [1] is not unreasonable, we analyzed 16S rRNA sequences from the Earth Microbiome Project [6,9] and measured the overlap of microbiota originating from individuals of different animal taxa. We found that, on average, 2 individuals from different host classes (e.g., 1 mammalian and 1 avian sample) share 1.26% of their OTUs (16S clustered at 100% similarity), and 2 individuals from different host genera belonging to the same class (e.g., 2 mammalian samples) share 2.84% of their OTUs (methods in S1 Text of this response). A coarser OTU threshold (e.g., 97% similarity, considered in our original paper [1]) would further increase these average overlaps. While less is known about insect microbiomes, there is currently little reason to expect a drastically different picture there, and, as explained in our S1 Text [1], even a small average microbiome overlap of 0.1% between host genera would strongly limit total bacterial richness estimates.
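As a minimal illustration of the kind of overlap statistic described above (this is not the authors’ actual pipeline, and the denominator convention used here is only one reasonable choice), the sketch below computes the average percentage of OTUs shared between samples from different host groups, using invented OTU sets.

from itertools import product

# Hypothetical per-sample OTU identifier sets (invented data, not from the study).
samples = {
    "mammal_1": {"otu01", "otu02", "otu03", "otu07"},
    "mammal_2": {"otu02", "otu04", "otu08"},
    "bird_1":   {"otu02", "otu05", "otu06"},
    "bird_2":   {"otu06", "otu07", "otu09"},
}
groups = {"mammal": ["mammal_1", "mammal_2"], "bird": ["bird_1", "bird_2"]}

def percent_shared(a, b):
    # Percentage of OTUs found in both samples, relative to all OTUs seen in either
    # sample (a Jaccard index times 100; other denominator conventions are possible).
    return 100.0 * len(a & b) / len(a | b)

# Average overlap across all cross-group sample pairs.
cross_pairs = list(product(groups["mammal"], groups["bird"]))
mean_overlap = sum(percent_shared(samples[x], samples[y]) for x, y in cross_pairs) / len(cross_pairs)
print(f"mean OTU overlap between host groups: {mean_overlap:.2f}%")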
The fact that the accumulation curve of detected bacterial OTUs over sampled insect species does not yet strongly level off says little about where the accumulation curve would asymptotically converge; rigorous statistical methods, such as the ones used for our global estimates [1], would be needed to estimate this asymptote.
Lastly, we stress that while the present conversation (including previous estimates by Louca and colleagues [1], Larsen and colleagues [2], Locey and colleagues [10], Wiens’ commentary, and this response) focuses on 16S rRNA OTUs, it may well be that at finer phylogenetic resolutions, e.g., at the bacterial strain level, host specificity and bacterial richness are substantially higher. In particular, future whole-genome sequencing surveys may well reveal the existence of far more genomic clusters and ecotypes than 16S-based OTUs.

11.
Active learning methods have been shown to be superior to traditional lecture in terms of student achievement, and our findings on the use of Peer-Led Team Learning (PLTL) concur. Students in our introductory biology course performed significantly better if they engaged in PLTL. There was also a drastic reduction in the failure rate for underrepresented minority (URM) students with PLTL, which further resulted in closing the achievement gap between URM and non-URM students. With such compelling findings, we strongly encourage the adoption of Peer-Led Team Learning in undergraduate Science, Technology, Engineering, and Mathematics (STEM) courses.
Recent, extensive meta-analysis of over a decade of education research has revealed an overwhelming consensus that active learning methods are superior to traditional, passive lecture in terms of student achievement in post-secondary Science, Technology, Engineering, and Mathematics (STEM) courses [1]. In light of such clear evidence that traditional lecture is among the least effective modes of instruction, many institutions have been abandoning lecture in favor of “flipped” classrooms and active learning strategies. Regrettably, however, STEM courses at most universities continue to feature traditional lecture as the primary mode of instruction.
Although next-generation active learning classrooms are becoming more common, large instructor-focused lecture halls with fixed seating are still the norm on most campuses—including ours, for the time being. While there are certainly ways to make learning more active in an amphitheater, peer-interactive instruction is limited in such settings. Of course, laboratories accompanying lectures often provide more active learning opportunities. But in the wake of commendable efforts to increase rigorous laboratory experiences at the sophomore and junior levels at Syracuse University, a difficult decision was made for the two-semester, mixed-majors introductory biology sequence: the lecture sections of the second semester course were decoupled from the laboratory component, which was made optional. There were good reasons for this change, from both departmental and institutional perspectives. However, although STEM students not enrolling in the lab course would arguably be exposed to techniques and develop foundational process skills in the new upper division labs, we were concerned about the implications for achievement among those students who would opt out of the introductory labs. Our concerns were apparently warranted, as students who did not take the optional lab course, regardless of prior achievement, earned scores averaging a letter grade lower than those students who enrolled in the lab. However, students who opted out of the lab but engaged in Peer-Led Team Learning (PLTL) performed at levels equivalent to students who also took the lab course [2].
Peer-Led Team Learning is a well-defined active learning model involving small group interactions between students, and it can be used along with or in place of the traditional lecture format that has become so deeply entrenched in university systems (Fig 1, adapted from [3]). PLTL was originally designed and implemented in undergraduate chemistry courses [4,5], and it has since been implemented in other undergraduate science courses, such as general biology and anatomy and physiology [6,7].
Studies on the efficacy of PLTL have shown improvements in students’ grade performance, attitudes, retention in the course [6–11], conceptual reasoning [12], and critical thinking [13], though findings related to the critical thinking benefits for peer leaders have not been consistent [14].
Fig 1. The PLTL model.
In the PLTL workshop model, students work in small groups of six to eight students, led by an undergraduate peer leader who has successfully completed the same course in which their peer-team students are currently enrolled. After being trained in group leadership methods, relevant learning theory, and the conceptual content of the course, peer leaders (who serve as role models) work collaboratively with an education specialist and the course instructor to facilitate small group problem-solving. Leaders are not teachers. They are not tutors. They are not considered to be experts in the content, and they are not expected to provide answers to the students in the workshop groups. Rather, they help mentor students to actively construct their own understanding of concepts.

12.
Blood vascular networks in vertebrates are essential to tissue survival. Establishment of a fully functional vasculature is complex and requires a number of steps including vasculogenesis and angiogenesis that are followed by differentiation into specialized vascular tissues (i.e., arteries, veins, and lymphatics) and organ-specific differentiation. However, an equally essential step in this process is the pruning of excessive blood vessels. Recent studies have shown that pruning is critical for the effective perfusion of blood into tissues. Despite its significance, vessel pruning is the least understood process in vascular differentiation and development. Two recently published PLOS Biology papers provide important new information about the cellular dynamics of vascular regression.
Vascular biology is a rapidly emerging field of research. Given the critical role the vasculature frequently plays in a wide range of common and serious diseases such as arteriosclerosis, ischemic diseases, cancer, and chronic inflammatory diseases, a better understanding of the formation, maintenance, and remodeling of blood vessels is of major importance.
A mature vascular network is a highly anisotropic, hierarchical, and dynamic structure that has evolved to provide optimal oxygen delivery to tissues under a variety of conditions. Whilst much has been learned about early steps in vascular development such as vasculogenesis and angiogenesis, we still know relatively little about how such anatomical and functional organization is achieved. Furthermore, the dynamic nature of mature vascular networks, with its potential for extensive remodeling and a continuing need for stability and maintenance, is even less understood. The issue of optimal vascular density in tissue is of particular importance, as several recent studies demonstrated that excessive vascularity may, in fact, reduce effective perfusion [1–3]. Since all neovascularization processes initially result in the formation of excessive amounts of vasculature, be that capillaries, arterioles, or venules, pruning must occur to return the vascular density to its optimal value in order to achieve effective tissue perfusion.
Yet despite its functional importance, little is known about how regression of the once-formed vasculature actually happens. While several potential mechanisms have been proposed, including apoptosis of endothelial cells, intussusception vascular pruning, and endothelial cell migration away from the regressing vessel, cellular and molecular understanding of how this might happen is conspicuously lacking. Two articles recently published in PLOS Biology describe migration of endothelial cells as the key mechanism of apoptosis-independent vascular pruning and place it in a specific biologic context. This important advance offers not only a new understanding of a poorly understood aspect of vascular biology but may also prove to be of considerable importance in the development of pro- and anti-angiogenic therapies.
To put vessel regression in context, it helps to briefly outline the current understanding of vessel formation. During embryonic development, vasculature forms in several distinct steps that begin with vasculogenesis, a step that involves differentiation of stem cells into primitive endothelial cells that then form initial undifferentiated and nonhierarchically organized lumenized vascular structures termed the primary plexus [4].
The primary plexus is then remodeled, by the process termed angiogenesis, into a more mature vascular network [5]. This remodeling event involves formation of new vessels accomplished either by branching angiogenesis, a process dependent on tip cell-driven formation of new branches [6], or intussusception, a poorly understood process of splitting an existing vessel into two [7]. This incompletely differentiated and still nonhierarchical vasculature then further remodels into a number of distinctly different types of vessels such as capillaries, arteries, and veins. This requires fate specification, differentiation, and incorporation of various mural cells into evolving vascular structures. Finally, additional specialization of the vascular network occurs in an organ-specific manner.
Once formed, vascular networks require active maintenance, as withdrawal of key signals, such as ongoing fibroblast growth factor (FGF) or vascular endothelial growth factor (VEGF) stimulation, can lead to a rapid loss of vascular integrity and even changes in endothelial cell fate [8–12]. In addition, mature vessels retain the capacity for extensive remodeling and new growth, as can be seen in a number of conditions from cancer to myocardial infarction and wound healing responses, among many others [5].
A key issue common to both embryonic and adult vessel remodeling is how an existing lumenized vessel connected to the rest of the vasculature undergoes a change that results in its remodeling into something else. Such a change may involve either a new branch formation or regression of an existing branch, while the patency and integrity of the remaining circulation are maintained. Two types of cellular process leading to branching have been described—sprouting and intussusception. Formation of vascular branches by sprouting involves VEGF-A-induced expression of high levels of delta-like ligand 4 (Dll4) in a subset of endothelial cells at the leading edge of the vascular sprouts that are lying closest to the source of VEGF, thus converting them to a “tip cell” phenotype. Some of the key features of tip cells include the presence of cytoplasmic processes that extend into avascular (or hypoxic) tissue and form nascent branches. Dll4 expressed on tip cells binds the Notch-1 receptor in neighboring endothelial cells, thereby activating their downstream Notch signaling. In turn, Notch signaling shuts down the formation of additional filopodia processes, converting these cells to a “stalk cell” phenotype and thereby avoiding excessive branching [13–15]. The bone morphogenetic protein signaling pathway provides further input in determining stalk cell fate [16]. Importantly, tip cells are only partially lumenized; only once they have converted to a stalk phenotype does the lumen extend to what was a tip cell and its sprouts.
An alternative mechanism of branching involves intussusception, a process by which a tissue pillar from the surrounding tissue splits the existing endothelial tube into two along its long axis, creating two adjoining vessels. While this process has been described morphologically, virtually nothing is known about its molecular and cellular regulation. In development, angiogenesis by intussusception occurs in vessels previously formed by sprouting angiogenesis [17,18].
Importantly, however, both sprouting angiogenesis and intussusception allow growth and remodeling of the vascular network without compromising its integrity, thereby avoiding bleeding and related complications.
There are certain parallels between vessel formation and branching and vessel regression. While growth occurs either via sprouting (a process linked to endothelial cell migration) or intussusception, regression involves either "reverse intussusception," endothelial migration-dependent regression, or apoptosis. The latter is the primary means of regression of the hyaloid vasculature in the eye and of the vascular loss seen in oxygen-induced retinopathy (OIR). In the case of hyaloid vasculature, secretion of WNT7b by macrophages invading the hyaloid membrane induces apoptosis of hyaloid endothelial cells, leading to the regression of the entire hyaloid vasculature [19]. This total apoptosis-induced loss of hyaloid blood vessels contrasts with the less extensive vascular regression seen in the setting of OIR. In this condition, exposure of the developing retinal vasculature to abnormally high oxygen levels leads to vascular damage characterized by capillary pruning [20]. The pruning is the consequence of apoptosis of endothelial cells due to the toxic effect of a combination of high oxygen and low VEGF levels. Interestingly, larger vessels and mature capillaries are not sensitive to hyperoxia [21].
Intussusception vascular pruning was also described in a low-VEGF context in the chick chorioallantoic membrane. Application of VEGF-releasing hydrogels to the membrane surface results in formation of an excessive vasculature. Removal or degradation of the hydrogel induces an abrupt VEGF withdrawal. In this context, formation of transluminal pillars, similar to the ones seen in intussusception angiogenesis, is observed in vessels undergoing pruning [22]. The same process is observed in the tumor vasculature in the setting of anti-angiogenic therapy [23]. Finally, apoptosis-independent vascular regression, driven by endothelial cell migration, has been described in the mouse retina, yolk vessels of the chick and mouse embryos, branchial arches, and the zebrafish brain [24–28].
In all of these cases, only a subset of vessels is designated for pruning, and the selection of these vessels is highly regulated. Yet the factors involved in choosing a particular vascular branch for pruning remain ill-defined. One such factor is low blood flow [27,28]. Another is Notch signaling, which has been shown to at least partially control vascular pruning in the mouse retina and in intersegmental vessels (ISVs) in zebrafish [24]. Loss of Notch-regulated ankyrin repeat protein (Nrarp), a target gene of Notch signaling, leads to an increase in vascular regression in these tissues due to a decrease in Wnt signaling-induced stalk cell proliferation. Similarly, in Dll4 +/- mice, developmental retinal vascular regression and OIR-induced vascular pruning are reduced [29], confirming the involvement of the Notch pathway in the control of vascular regression.
The two factors may be linked, as low flow can affect endothelial shear stress and lead to a decrease in Notch activation. Such a link is suggested by studies on vascular regression in mice with endothelial expression of a dominant-negative NFκB pathway inhibitor, which demonstrate excessive vascular growth but reduced tissue perfusion [2].
Molecular studies showed that inhibition of flow- or cytokine-induced NFκB activation results in decreased Dll4 expression [2].
Another important issue is the fate of endothelial cells from vessels undergoing pruning. In PLOS Biology, two groups recently described endothelial cell behavior during vascular pruning in three different models: the mouse retina, the ISVs in zebrafish, and the subintestinal vessel in zebrafish [30,31]. Using a high-resolution time-lapse microscopy technique, Lenard and collaborators showed that vascular pruning during subintestinal vessel formation occurs in two different ways. In type I pruning, the first step is the collapse of the lumen. Once that occurs, endothelial cells migrate and incorporate into the neighboring vessels. In type II pruning, the lumen is maintained. One endothelial cell in the center of the pruning vessel undergoes self-fusion, leading to a unicellular lumenized vessel. At the same time, other endothelial cells migrate away and incorporate into the neighboring vessels. The eventual lumen collapse is the last step, after which the remaining single endothelial cell migrates and incorporates into one of the major vessels.
Franco and collaborators described a pruning mechanism similar to the type I pruning described by Lenard et al., showing lumen disruption as an initial step in pruning of the retinal vasculature in mice and of ISVs in zebrafish [31]. By analyzing the first axial polarity map of endothelial cells in these models, they demonstrated that axial orientation predicts endothelial cell migration, and that migration-driven pruning occurs in vessels with low flow. Interestingly, migrating endothelial cells in regressing vessels display a tip cell phenotype with filopodia.
The cellular dynamics of vessel pruning described here are the reverse of the cellular dynamics during anastomosis and angiogenesis [32]. Given the crucial role of factors such as VEGF for the migration of endothelial cells during angiogenesis, can we go further and propose that other cytokines or cell–cell signaling may be involved in the migration of these endothelial cells? Indeed, low blood flow seems to be the cause of vessel pruning, but how can we explain the direction of endothelial cell migration, moreover with a tip cell morphology? Also, what determines the choice between type I and type II pruning? The collapse of the lumen suggests a reorganization of the cytoskeleton, and a loss of polarity and electrostatic repulsion of endothelial cells. The molecular mechanisms leading from low shear stress to loss of endothelial cell polarity need further investigation. As defective vascular pruning could be involved in poor recovery after injury or ischemic accident, a better understanding of the molecular control of this phenomenon appears to have medical consequences. Another question that is still unanswered is the fate of the mural cells that surround the pruned vessels. Small vessels are covered by pericytes, which have strong interactions with endothelial cells. How and when are these interactions disrupted? Are pericytes integrated into the neighboring vessel, or do they undergo apoptosis? Further studies are needed to understand the molecular and cellular mechanisms by which the vasculature can adapt, even at the adult stage, to support the nutrient and oxygen needs of each cell.
Overall, taking the results of these studies together with other recent developments in this field, the following picture is emerging (Fig 1).
Under conditions of low blood flow in certain vascular tree branches, pruning will occur via endothelial cell migration out of these branches to the neighboring (presumably higher blood flow) vessels. This results in decreased total vascular cross-sectional area and increased average blood flow, thereby terminating further pruning. Importantly, this occurs without the loss of luminal integrity and without reduction in the total endothelial cell mass. At the same time, vessels that suddenly find themselves in a low-VEGF environment will regress either by apoptosis of endothelial cells or by intussusception. In both cases, there is a reduction in the total vasculature without an increase in blood flow to this tissue. Thus, the local context determines the mechanism: migratory regression and remodeling under low shear stress versus apoptotic pruning in a low-VEGF milieu.
Fig 1. Vessel regression under low flow versus low VEGF conditions. Vessel regression under low flow conditions proceeds by endothelial cell (EC) migration-driven regression, resulting in a decrease in total vessel area but an increase in blood flow (left panel). Vessel regression under low VEGF conditions proceeds by EC apoptosis or intussusception regression, resulting in decreased vessel number and decreased flow to tissues subtended by the regressing vasculature (right panel). Image credit: Nicolas Ricard & Michael Simons.
This distinction is likely to be of significant practical importance, in particular in the context of therapies designed to facilitate vessel normalization in tumors after VEGF-targeting treatments and therapies designed to promote vascularization of mildly ischemic tissues as occurs, for example, in the setting of chronic stable angina and other similar conditions. In the former case, a precipitous drop in VEGF levels is likely to induce vascular regression by induction of endothelial apoptosis, and further promotion of apoptosis may facilitate this process. In contrast, in the latter case, low flow in newly formed collateral arteries may induce their regression by stimulating outmigration of endothelial cells, thereby limiting their beneficial functional impact. Therapies designed to inhibit this mechanism, therefore, may promote growth of the new functional vasculature.

13.
Carefully calibrated transmission models have the potential to guide public health officials on the nature and scale of the interventions required to control epidemics. In the context of the ongoing Ebola virus disease (EVD) epidemic in Liberia, Drake and colleagues, in this issue of PLOS Biology, employed an elegant modeling approach to capture the distributions of the number of secondary cases that arise in the community and health care settings in the context of changing population behaviors and increasing hospital capacity. Their findings underscore the roles of increasing the rate of safe burials, the fraction of infectious individuals who seek hospitalization, and hospital capacity in achieving epidemic control. However, further modeling efforts of EVD transmission and control in West Africa should utilize the spatial-temporal patterns of spread in the region by incorporating spatial heterogeneity in the transmission process. Detailed datasets are urgently needed to characterize temporal changes in population behaviors, contact networks at different spatial scales, population mobility patterns, adherence to infection control measures in hospital settings, and hospitalization and reporting rates.
Ebola virus disease (EVD) is caused by an RNA virus of the family Filoviridae and genus Ebolavirus. Five different Ebolavirus strains have been identified, namely Zaire ebolavirus (EBOV), Sudan ebolavirus (SUDV), Tai Forest ebolavirus (TAFV), Bundibugyo ebolavirus (BDBV), and Reston ebolavirus (RESTV). The great majority of past Ebola outbreaks in humans have been linked to three Ebola strains: EBOV, SUDV, and BDBV [1]. The Ebola virus (EBOV, formerly designated Zaire ebolavirus) derived its name from the Ebola River, located near the epicenter of the first outbreak identified in 1976 in Zaire (now the Democratic Republic of Congo). EVD outbreaks among humans have been associated with direct human exposure to fruit bats—the most likely reservoir of the virus—or through contact with intermediate infected hosts, which include gorillas, chimpanzees, and monkeys. Outbreaks have been reported on average every 1.5 years [2]. Past EVD outbreaks have occurred in relatively isolated areas and have been limited in size and duration (Fig. 1). It has been recently estimated that about 22 million people living in areas of Central and West Africa are at risk of EVD [3].
Figure 1. Time series of the temporal progression of four past EVD outbreaks in Congo (1976, 1995, 2014) [4–6] and Uganda (2000) [7].
An epidemic of EVD (EBOV) has been spreading in West Africa since December 2013 in Guinea, Liberia, and Sierra Leone [8]. A total of 18,603 cases, with 6,915 deaths, have been reported to the World Health Organization as of December 17, 2014 [9].
While the causative strain associated with this epidemic is closely related to that of past outbreaks in Central Africa [10], three key factors have contributed disproportionately to this unprecedented epidemic: (1) substantial delays in detection and implementation of control efforts in a region characterized by porous borders; (2) limited public health infrastructure, including epidemiological surveillance systems and diagnostic testing [11], which are necessary for the timely diagnosis of symptomatic individuals, effective isolation of infectious individuals, contact tracing to rapidly identify new cases, and providing supportive care to increase the chances of surviving EVD infection; and (3) cultural practices that involve touching the body of the deceased and the association of illness with witchcraft or conspiracy theories.
EBOV is transmitted by direct human-to-human contact via body fluids or indirect contact with contaminated surfaces, but it is not spread through the airborne route. Individuals become symptomatic after an average incubation period of 10 days (range 2–21 days) [12], and infectiousness is increased during the later stages of disease [13]. The characteristic symptoms of EVD are nonspecific and include sudden onset of fever, weakness, vomiting, diarrhea, headache, and a sore throat, while only a fraction of the symptomatic individuals present with hemorrhagic manifestations [14]. The case fatality risk (CFR), calculated as the proportion of deaths among the total number of EVD cases with known outcomes, has been estimated from data of the first 9 months of the epidemic in West Africa at 70.8% (95% CI 68.6–72.8), in broad agreement with estimates from past outbreaks [12].
Two important quantities to understand in the transmission dynamics of EVD are the serial interval and the basic reproduction number. The serial interval is defined as the time from illness onset in a primary case to illness onset in a secondary case [15] and has been estimated at 15 days on average for the ongoing epidemic [12]. The basic reproduction number, R0, quantifies transmission potential at the beginning of an epidemic and is defined as the average number of secondary cases generated by a typical infected individual during the early phase of an epidemic, before interventions are put in place [16]. If R0 < 1, transmission is not sufficient to generate a major epidemic. In contrast, a major epidemic is likely to occur whenever R0 > 1. When transmission potential is measured over time t, the effective reproduction number, Rt, can be used to quantify the time-dependent transmission potential resulting from the effect of control interventions and behavior changes [17]. Estimates of R0 for the ongoing epidemic in West Africa have fluctuated around 2 with some uncertainty (e.g., [12, 18–22]), which is in good agreement with estimates from past EVD outbreaks [23].
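As a generic numerical illustration of how these two quantities relate (this is not the estimation method of any of the studies cited above), the reproduction number can be derived from an observed exponential growth rate and the serial-interval distribution via R = 1/M(−r), where M is the moment-generating function of that distribution. The sketch below assumes a gamma-distributed serial interval with the 15-day mean quoted above; the standard deviation and doubling time are invented placeholders.

import math

# Illustrative only: the relation R = 1 / M(-r) for a gamma-distributed serial interval.
mean_si = 15.0        # mean serial interval in days (as quoted above)
sd_si = 9.0           # assumed standard deviation of the serial interval, days
doubling_time = 21.0  # assumed epidemic doubling time, days

r = math.log(2.0) / doubling_time   # exponential growth rate per day
shape = (mean_si / sd_si) ** 2      # gamma shape parameter
scale = sd_si ** 2 / mean_si        # gamma scale parameter
R = (1.0 + scale * r) ** shape      # 1 / M(-r) evaluated for the gamma distribution

print(f"growth rate r = {r:.4f} per day -> reproduction number R = {R:.2f}")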
R0 could also vary across regions as a function of the local public health infrastructure (e.g., the availability of health care settings and infection control protocols), such that an outbreak may be very unlikely to unfold in developed countries simply as a result of the baseline infection control measures in place (i.e., R0 < 1), while poor countries with extremely weak or absent public health systems may be unable to control an Ebola outbreak (i.e., R0 > 1).

Mathematical models of disease transmission have proved to be useful tools to characterize the transmission dynamics of infectious diseases and to evaluate the effects of control intervention strategies in order to inform public health policy [16, 24, 25]. There are a limited number of mathematical models for the transmission and control of EVD, but a number of efforts are underway in the context of the epidemic in West Africa. The transmission dynamics of EVD have been modeled on the basis of the simple compartmental susceptible-exposed-infectious-removed (SEIR) model, which assumes a homogeneously mixed population [23]. The modeled population can be structured according to the contributions of community, hospital, and unsafe burials to transmission, as EVD transmission has been amplified in health care settings with ineffective infection control measures and during unsafe burials [23]. A schematic representation of the main transmission pathways of EVD is shown in Fig. 2.

Figure 2. Schematic representation of the transmission dynamics of Ebola virus disease.

A recent study published in PLOS Biology by Drake and colleagues [26] presents an interesting and flexible modeling framework for the transmission and control of EVD in Liberia. Their framework is based on a multi-type branching process model, in which “multi-type” refers to the consideration of two types of settings where transmission can occur, while “branching process” is the mathematical term for the underlying probabilistic model. For instance, in the case of a single-type branching process, the transmission dynamics are simply described using a single reproduction number, i.e., the average number of secondary cases produced by a single primary case. However, when two types of hosts are considered in the transmission process, two reproduction numbers are needed to characterize within-group mixing (e.g., within-hospital and within-community transmission) and two further reproduction numbers characterize transmission between groups (e.g., transmission from hospital to community and vice versa).

Drake and colleagues’ elegant modeling approach describes EVD transmission according to infection generations by calculating probability distributions of the number of secondary cases that arise in the community, via nursing care or during unsafe burials, and in health care settings, via infections of health care workers and visitors. The model explicitly accounts for the hospitalization rate—the fraction of infectious individuals in the community seeking hospitalization (estimated in this study at 60%). However, the number of effectively isolated infectious individuals is constrained by the number of available beds in treatment centers—which are assumed in this study to operate at twice their regular capacity. It is important to note that the number of beds available to treat EVD patients was severely limited in Liberia prior to mid-August 2014 (Fig. 1 in [26]). Moreover, the rate of safe burials, which reduces the force of infection, is included in their model as an increasing function of time.
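To complement that verbal description, here is a minimal sketch of a generic two-type branching process of the kind outlined above, written in Python with NumPy. The next-generation matrix, the Poisson offspring assumption, and the seed cases are hypothetical placeholders chosen for illustration; this is not Drake and colleagues’ calibrated model or their parameter values.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical next-generation matrix for a two-type branching process:
# R[i, j] = mean number of new type-j cases caused by one type-i case.
# Types: 0 = community, 1 = hospital/health care setting (values illustrative only).
R = np.array([[1.2, 0.4],   # community -> community, community -> hospital
              [0.6, 0.3]])  # hospital  -> community, hospital  -> hospital

def simulate(generations=8, seed_cases=(5, 0)):
    """Simulate case counts per infection generation, by setting."""
    cases = np.array(seed_cases, dtype=int)
    history = [cases.copy()]
    for _ in range(generations):
        new = np.zeros(2, dtype=int)
        for i in (0, 1):          # offspring of current community and hospital cases
            for j in (0, 1):
                # Poisson offspring with mean R[i, j] per case of type i
                new[j] += rng.poisson(R[i, j], size=cases[i]).sum()
        cases = new
        history.append(cases.copy())
    return np.array(history)

print(simulate())  # rows: infection generations; columns: community, hospital cases
```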
The model was calibrated by tuning six parameters to fit the trajectories of the number of reported cases in the community and among health care workers during the period 4 July to 2 September 2014, spanning a total of four infection generations, during which the effective reproduction number was estimated to decline on average from about 2.8 to 1.4. The model was able to effectively capture heterogeneity in the transmission of EVD in both the community and hospital settings.

Drake and colleagues [26] employed their calibrated model to forecast the epidemic trajectory in Liberia from 3 September to 31 December 2014 under different scenarios that account for an increasing fraction of cases seeking hospitalization and a surge in the number of beds available to isolate and treat EVD patients. Their results indicate that allocating 1,700 additional beds (100 new beds every 4 days) in new Ebola treatment centers committed by US aid reduces the mean epidemic size to ~51,000 (a 60% reduction with respect to the baseline scenario), while epidemic control by mid-March is only plausible through a 4-fold increase in the number of beds committed by US aid together with an enhancement of the hospitalization rate from 60% to 99%, for a final epidemic size of 12,285. Moreover, an additional epidemic forecast incorporating data up to 1 December 2014 indicated that containment could be achieved between March and June 2015.

Other interventions were not explicitly incorporated in their model because it is difficult to parameterize them in the absence of datasets that permit statistical estimation of their impact on the transmission dynamics. These additional interventions include the use of household protection kits, designed to reduce transmission in the community; improvements in infection control protocols in health care settings that reduce transmission among health care workers; and the impact of rapid diagnostic kits in Ebola treatment centers, which reduce the time to isolation for infectious individuals seeking hospitalization. Increasing awareness and education of the population about the disease could have also yielded further reductions in case incidence by reducing the size of the at-risk susceptible population (Fig. 3) [27]. Nevertheless, some of these effects could have been captured implicitly by the time-dependent safe burial rate parameter in their model.

Figure 3. Contrasting epidemic growth in the presence and absence of behavior changes that reduce the transmission rate.

Importantly, prior models of EVD transmission [23, 28, 29] and the model by Drake and colleagues have not incorporated spatial heterogeneity in the transmission dynamics. In particular, the EVD epidemic in West Africa can be characterized as a set of asynchronous local (e.g., district) epidemics that exhibit sub-exponential growth, which could be driven by a highly clustered underlying contact network or by population behavior changes induced by the accumulation of morbidity and mortality (see Fig. 4 and [30]). EVD contagiousness is most pronounced in the later and more severe stages of Ebola infection, when infectious individuals are confined at home or in health care settings and are mostly exposed to caregivers (e.g., health care workers, family members) [30].
This characterization would lead to EVD transmission over a network of contacts that is highly clustered (e.g., individuals are likely to share a significant fraction of their contacts), which is associated with significantly slower spread relative to the common random-mixing assumption, as illustrated in Fig. 5. The development of transmission models that incorporate spatial heterogeneity (e.g., by modeling spatial coupling or human migration) is currently limited by the shortage of detailed datasets from the EVD-affected areas on the geographic distribution of households, health care settings, reporting and hospitalization rates across urban and rural areas, and patterns of population mobility in the region. Some of these limitations may be overcome in the near future. For instance, cell phone data could provide a basis to characterize population mobility in the region at a refined spatial scale.

Figure 4. Representative time series of the cumulative number of EVD cases (in log scale) at the district level in Guinea, Sierra Leone, and Liberia.

Figure 5. Epidemic growth in two populations characterized by two different underlying contact networks.

The ongoing epidemic in West Africa offers a unique opportunity to improve our current understanding of the transmission characteristics of EVD in humans. To achieve this goal, it is crucial to collect spatial-temporal data on population behaviors, contact networks, social distancing measures, and education campaigns. Datasets comprising detailed demographic and socio-economic information, contact rates, and population mobility estimates in the region (e.g., commuting networks, air traffic) need to be integrated and made publicly available in order to develop highly resolved transmission models, which could guide control strategies with greater precision in the context of the EVD epidemic in West Africa. Although recent data from Liberia indicate that the epidemic is on track for eventual control, the epidemic in Sierra Leone continues on an increasing trend, and in Guinea, case incidence roughly follows a steady trend. The potential impact of vaccines should also be incorporated in future modeling efforts, as these pharmaceutical interventions are expected to become available in the upcoming months.  相似文献

14.
15.
With the increasing appreciation for the crucial roles that microbial symbionts play in the development and fitness of plant and animal hosts, there has been a recent push to interpret evolution through the lens of the “hologenome”—the collective genomic content of a host and its microbiome. But how symbionts evolve and, particularly, whether they undergo natural selection to benefit hosts are complex issues that are associated with several misconceptions about evolutionary processes in host-associated microbial communities. Microorganisms can have intimate, ancient, and/or mutualistic associations with hosts without having undergone natural selection to benefit hosts. Likewise, observing host-specific microbial community composition or greater community similarity among more closely related hosts does not imply that symbionts have coevolved with hosts, let alone that they have evolved for the benefit of the host. Although selection at the level of the symbiotic community, or hologenome, occurs in some cases, it should not be accepted as the null hypothesis for explaining features of host–symbiont associations.

The ubiquity and importance of microorganisms in the lives of plants and animals are ever more apparent, and increasingly investigated by biologists. Suddenly, we have the aspiration and tools to open up a new, complicated world, and we must confront the realization that almost everything about larger organisms has been shaped by their history of evolving from, then with, microorganisms [1]. This development represents a dramatic shift in perspective—arguably a revolution—in modern biology.

Do we need to revamp basic tenets of evolutionary theory to understand how hosts evolve with associated microorganisms? Some scientists have suggested that we do [2], and the recently introduced terms “holobiont” and “hologenome” encapsulate what has been described as an “emerging postmodern synthesis” [3]. Holobiont was initially used to refer to a host and a single inherited symbiont [4] but was later extended to a host and its community of associated microorganisms, specifically for the case of corals [5]. The idea of the holobiont is that a host and its associated microorganisms must be considered as an integrated unit in order to understand many biological and ecological features.

The later introduction of the term hologenome [2,6,7] sought to describe a holobiont by its genetic composition. The term has been used in different ways by different authors, but in most contexts a hologenome is considered a genetic unit that represents the combined genomes of a host and its associated microorganisms [8]. This non-controversial definition of hologenome is linked to the idea that this entity has a role in evolution. For example, Gordon et al. [1,9] state, “The genome of a holobiont, termed the hologenome, is the sum of the genomes of all constituents, all of which can evolve within that context.” That last phrase is sufficiently general that it can be interpreted in any number of ways. Like physical conditions, associated organisms can be considered as part of the environment and thus can be sources of natural selection, affecting evolution in each lineage.

But a more sweeping and problematic proposal is given by the originators of the term, which is that “the holobiont with its hologenome should be considered as the unit of natural selection in evolution” [2,7] or, by others, that “an organism’s genetics and fitness are inclusive of its microbiome” [3,4].
The implication is that differential success of holobionts influences the evolution of participating organisms, such that their observed features cannot be fully understood without considering selection at the holobiont level. Another formulation of this concept is the proposal that the evolution of host–microbe systems is “most easily understood by equating a gene in the nuclear genome to a microbe in the microbiome” [8]. Under this view, interactions between host and microbial genotypes should be considered as genetic epistasis (interactions among alleles at different loci in a genome) rather than as interactions between the host’s genotype and its environment.

While biologists would agree that microorganisms have important roles in host evolution, this statement is a far cry from the claim that they are fused with hosts to form the primary units of selection, or that hosts and microorganisms provide different portions of a unified genome. Broadly, the hologenome concept contends, first, that participating lineages within a holobiont affect each other’s evolution, and, second, that the holobiont is a primary unit of selection. Our aim in this essay is to clarify what kinds of evidence are needed for each of these claims and to argue that neither should be assumed without evidence. We point out that some observations that superficially appear to support the concept of the hologenome have spawned confusion about real biological issues (Box 1).

Box 1. Misconceptions Related to the Hologenome Concept

Misconception #1: Similarities in microbiomes between related host species result from codiversification. Reality: Related species tend to be similar in most traits. Because microbiome composition is a trait that involves living organisms, it is tempting to assume that these similarities reflect a shared evolutionary history of host and symbionts. This has been shown to be the case for some symbioses (e.g., ancient maternally inherited endosymbionts in insects). But for many interactions (e.g., gut microbiota), related hosts may have similar effects on community assembly without any history of codiversification between the host and individual microbial species (Fig 1B).

Fig 1. Alternative evolutionary processes can result in related host species harboring similar symbiont communities. Left panel: Individual symbiont lineages retain fidelity to evolving host lineages, through co-inheritance or other mechanisms, with some gain and loss of symbiont lineages over evolutionary time. Right panel: As host lineages evolve, they shift their selectivity of environmental microbes, which are not evolving in response and which may not even have been present during host diversification. In both cases, measures of community divergence will likely be smaller for more closely related hosts, but they reflect processes with very different implications for hologenome evolution. Image credit: Nancy Moran and Kim Hammond, University of Texas at Austin.

Misconception #2: Parallel phylogenies of host and symbiont, or intimacy of host and symbiont associations, reflect coevolution. Reality: Coevolution is defined by a history of reciprocal selection between parties. While coevolution can generate parallel phylogenies or intimate associations, these can also result from many other mechanisms.

Misconception #3: Highly intimate associations of host and symbionts, involving exchange of cellular metabolites and specific patterns of colonization, result from a history of selection favoring mutualistic traits. Reality: The adaptive basis of a specific trait is difficult to infer even when the trait involves a single lineage, and it is even more daunting when multiple lineages contribute. But complexity or intimacy of an interaction does not always imply a long history of coevolution, nor does it imply that the nature of the interaction involves mutual benefit.

Misconception #4: The essential roles that microbial species/communities play in host development are adaptations resulting from selection on the symbionts to contribute to holobiont function. Reality: Hosts may adapt to the reliable presence of symbionts in the same way that they adapt to abiotic components of the environment, and little or no selection on symbiont populations need be involved.

Misconception #5: Because of the extreme importance of symbionts in essential functions of their hosts, the integrated holobiont represents the primary unit of selection. Reality: The strength of natural selection at different levels of biological organization is a central issue in evolutionary biology and the focus of much empirical and theoretical research. But insofar as there is a primary unit of selection common to diverse biological systems, it is unlikely to be at the level of the holobiont. In particular cases, evolutionary interests of host and symbionts can be sufficiently aligned such that the predominant effect of natural selection on genetic variation in each party is to increase the reproductive success of the holobiont.
But in most host–symbiont relationships, contrasting modes of genetic transmission will decouple selection pressures.  相似文献   

16.
17.
Life history theory predicts that trait evolution should be constrained by competing physiological demands on an organism. Immune defense provides a classic example in which immune responses are presumed to be costly and therefore come at the expense of other traits related to fitness. One strategy for mitigating the costs of expensive traits is to render them inducible, such that the cost is paid only when the trait is utilized. In the current issue of PLOS Biology, Bajgar and colleagues elegantly demonstrate the energetic and life history cost of the immune response that Drosophila melanogaster larvae induce after infection by the parasitoid wasp Leptopilina boulardi. These authors show that infection-induced proliferation of defensive blood cells commands a diversion of dietary carbon away from somatic growth and development, with simple sugars instead being shunted to the hematopoietic organ for rapid conversion into the raw energy required for cell proliferation. This metabolic shift results in a 15% delay in the development of the infected larva and is mediated by adenosine signaling between the hematopoietic organ and the central metabolic control organ of the host fly. The adenosine signal thus allows D. melanogaster to rapidly marshal the energy needed for effective defense and to pay the cost of immunity only when infected.

While sitting around the campfire, evolutionary biologists may tell tales of the Darwinian Demon [1], a mythical being who is reproductively mature upon birth, lives forever, and has infinite offspring. In the morning, though, we know that no such creature can exist. All organisms must make compromises, and fitness is determined by striking the optimal balance among traits with competing demands. This is the central premise of life history theory: adaptations are costly, and increasing investment in one trait often forces decreased investment in others [2].

Phenotypic plasticity provides a partial solution to this evolutionary conundrum. If traits can be called upon only when required, the costs can be mitigated during periods of disuse. There are many examples of plastic costly adaptations, including the defensive helmets grown by Daphnia species in response to the presence of predators or abiotic factors that signal risk of predation ([3,4] and references therein), the flamboyant plumage that male birds exhibit during breeding season [5], and the inducible immune systems of higher plants and animals [6,7].

The very inducibility of immune systems implicitly argues for their cost. If immune defense were cost-free, it would be constantly deployed for maximum protection against pathogenic infection. However, immune reactions have frequently been inferred to be energetically demanding (e.g., [8,9]) and carry the risk of autoimmune damage (e.g., [10,11]). It may therefore be evolutionarily adaptive to minimize immune activity in the absence of infection, to rapidly ramp up immunity in response to infection, and then to quickly shut down the immune response after the infection has been managed [12,13]. A paper by Bajgar et al. in the current issue of PLOS Biology [14] uses the Drosophila–Leptopilina host–pathogen system to quantitatively measure the energetic expense of induced up-regulation of immunity, demonstrating plastic metabolic reallocation toward immune cell proliferation at the expense of nutrient storage, growth, and development.

Parasitoid wasps such as Leptopilina species infect their insect larval hosts by laying an egg inside the host body cavity [15].
Unimpeded, the egg hatches into a wasp larva that feeds off and develops inside the still-living host. In the case of Leptopilina boulardi infecting Drosophila melanogaster, the infected host larva survives to enter pupation, but if the parasitization is successful, an adult wasp will ultimately emerge from the pupal case instead of an adult fruit fly. This clearly is against the interest of the D. melanogaster host, so the larval fly attempts to encapsulate the L. boulardi egg in a sheath of specialized blood cells called lamellocytes that collaborate with other cell types to suffocate the wasp egg, deprive it of nutrients, and kill it with oxidative free radicals [16]. The struggle between fly and wasp is of the highest stakes, with guaranteed death and complete loss of evolutionary fitness for the loser.

Natural D. melanogaster populations are rife with genetic variation for resistance to parasitoids, and laboratory selection for as few as five generations can increase host survivorship from 1%–5% to 40%–60% (e.g., [17,18]). This evolved resistance comes at a cost, though. Larvae from evolved resistant strains have decreased capacity to compete with their unselected progenitors under crowded or nutrient-poor conditions [17,18]. The general mechanism for enhanced resistance has been recurrently revealed to be an increase in the number of circulating blood cells (hemocytes). But why are the resistant larvae outcompeted by susceptible larvae? One compelling hypothesis is that the extra investment in blood cell proliferation comes at an energetic cost to the development of other tissues, a cost that may be exacerbated by a decrease in the feeding rate of the selected lines [19] and that leads to impaired development under nutrient-limiting conditions. The cost can be limited, however, by producing the defensive blood cells in large numbers only when the host is infected and has need of them. Effectively achieving this requires the capacity to rapidly signal cell proliferation and to recruit the energy required for hematopoiesis from other physiological processes.

Bajgar et al. [14] use a series of careful experiments to document energetic redistributions and costs associated with hemocyte proliferation and lamellocyte differentiation in D. melanogaster infected by L. boulardi. They find that parasitoid infection results in a 15% delay in host development, and that at least a fraction of this delay can be plausibly attributed to a metabolic reallocation that supports blood cell proliferation over somatic growth. Specifically, they find that dietary carbohydrates are shunted away from energetic storage and tissue development, and instead are routed to the lymph gland, where hemocytes are being produced and differentiated. The signal to activate this reallocation emanates from the lymph gland and developing hemocytes themselves in the form of secreted adenosine, which is then received and interpreted by the central metabolic control organ, the fat body. Bajgar et al. [14] are able to show with unprecedented quantitative precision that secreted adenosine acts as a signal to rapidly trigger inducible immune defense, that this immune induction is costly at the level of individual tissues and the whole organism, and that the cost of induced defense is paid only upon infection.

The D. melanogaster metabolic rearrangement after infection by L. boulardi is profound [14].
Within 6–18 hours of infection, host larvae show a strong reduction in the incorporation of dietary carbon into stored lipids and protein and an overall reduction in glycogen stores. The levels of circulating glucose and trehalose spike, with those saccharides seemingly directed to the hemocyte-producing lymph gland. Expression of glycolytic genes is suppressed in the fat body, while fat body expression of a trehalose transporter is up-regulated to promote trehalose secretion into circulation. Glucose and trehalose transporters are concomitantly up-regulated in the lymph gland and developing hemocytes to facilitate import of the circulating saccharides. Genes required for glycolysis—but not the TCA cycle—are simultaneously up-regulated in the lymph gland and nascent hemocytes to turn those sugars into quick energy under the Warburg effect [20]. The totality of the data is consistent with an interpretation that the induced hemocyte proliferation is directly costly to host larval growth as an immediate consequence of substantial energetic reallocation.

This same research group had previously shown that extracellular adenosine can be used by Drosophila as a signal in regulating inflammation-like responses [21]. In the present study, they use RNAi to knock down expression of the adenosine transporter ENT2 specifically in the lymph gland and developing hemocytes. This prevention of adenosine export eliminates the spike in circulating glucose and trehalose after infection. Infected larvae with knocked-down expression of ENT2 continue to allocate dietary carbon to lipids and proteins as though they were uninfected, they fail to differentiate an adequate number of lamellocytes, and they suffer reduced resistance to the parasitoid. These data firmly implicate extracellular adenosine as a critical trigger mediating the switch in energetic allocation from growth and development to induced immunity. In further support of this interpretation, larvae that are mutant for the adenosine receptor AdoR show a nearly 70% reduction relative to wild type in the number of differentiated lamellocytes after infection and consequently encapsulate the parasitoid egg at nearly 4-fold lower rates. Like the ENT2 knockdowns, AdoR mutant larvae fail to exhibit elevated levels of circulating glucose or trehalose when infected by L. boulardi, although, surprisingly, the AdoR mutants do show an appropriate inhibition of glycogen storage when infected. Overall, the genetic results indicate that the ENT2 protein allows secretion of adenosine from the lymph gland. This adenosine signal is received in the fat body, which then dampens energetic storage in favor of secretion and circulation of simple saccharides. These sugars are taken up by the lymph gland to rapidly facilitate hemocyte proliferation, lamellocyte differentiation, and immune defense against the parasitoid. In satisfying consistency with this model, Bajgar et al. [14] show that lamellocyte differentiation and host resistance can be partially rescued even in the absence of ENT2 or AdoR by supplementing the D. melanogaster diet with extra glucose, thereby providing the boost in circulating saccharides required for hematopoiesis.

There are key differences between studies of evolutionary tradeoffs, such as those conducted by Kraaijeveld and colleagues (e.g., [17–19]), and studies of physiological costs, such as the one conducted by Bajgar et al. [14]. The evolutionary tradeoffs exposed by experimental selection are experienced even in the absence of parasitoid infection.
Blood cell number is constitutively higher in the evolved resistant strains, which presumably allows a faster and more robust defense against a wasp egg. However, the resistant larvae always suffer the cost when reared in competitive conditions. Because the costs arise from a constitutive investment in defense, regardless of whether parasitoids are present, these are sometimes referred to as “maintenance” costs or fixed costs [22]. In contrast, Bajgar et al. have measured a physiological “deployment” cost [22], which is conditionally experienced only once the host activates an immune response.

It remains an open question to what extent maintenance costs and deployment costs share mechanistic bases. In the present example, Bajgar et al. [14] show clearly that the machinery is in place to route energetic investment away from growth and development in favor of hemocyte proliferation. Extrapolating from the laboratory selection experiments [17–19], a natural increase in the epidemiological risk of parasitization in the wild might favor greater constitutive investment in hemocyte production. In such a scenario, it is easy to imagine the adaptive value of a mutation or genetic variant that results in higher expression of ENT2 in the lymph gland in the absence of infection, driving constitutively higher hemocyte number and protection against infection at the expense of constitutively lower nutrient storage and growth. In this way, a plastic switch could be converted into a hardwired trait, a deployment cost would become a maintenance cost, and a physiological tradeoff would become an evolutionary one. However, it is important to appreciate that there typically are many solutions to any biological problem, and constitutively elevated blood cell number could also evolve via mechanisms unrelated to adenosine signaling. A key challenge for life history biologists will be to bridge physiological studies such as that by Bajgar and colleagues [14] with the evolutionary studies that preceded it, testing whether physiological and evolutionary tradeoffs share a mechanistic basis.

While in the bright light of day evolutionary biologists can agree that the Darwinian Demon is just a ghost story, we have precious few examples of evolutionary or physiological tradeoffs where the mechanistic bases are well understood. More careful quantitative and genetic studies like that of Bajgar et al. [14] are necessary to carry us beyond reliance on abstract concepts like “resource pools” [2] and into a mechanistic understanding of how life history tradeoffs operate on both the physiology of individuals and the evolution of species.  相似文献

18.
From bacteria to multicellular animals, most organisms exhibit declines in survivorship or reproductive performance with increasing age (“senescence”) [1],[2]. Evidence for senescence in clonal plants, however, is scant [3],[4]. During asexual growth, we expect that somatic mutations that negatively impact sexual fitness should accumulate and contribute to senescence, especially among long-lived clonal plants [5],[6]. We tested whether older clones of Populus tremuloides (trembling aspen) from natural stands in British Columbia exhibited significantly reduced reproductive performance. Coupling molecular-based estimates of clone age with male fertility data, we observed a significant decline in the average number of viable pollen grains per catkin per ramet with increasing clone age in trembling aspen. We found that mutations reduced relative male fertility in clonal aspen populations by about 5.8×10−5 to 1.6×10−3 per year, leading to an 8% reduction in the number of viable pollen grains, on average, among the clones studied. The probability that an aspen lineage ultimately goes extinct rises as its male sexual fitness declines, suggesting that even long-lived clonal organisms are vulnerable to senescence.  相似文献

19.
The modern evolutionary synthesis codified the idea that species exist as distinct entities because intrinsic reproductive barriers prevent them from merging together. Understanding the origin of species therefore requires understanding the evolution and genetics of reproductive barriers between species. In most cases, speciation is an accident that happens as different populations adapt to different environments and, incidentally, come to differ in ways that render them reproductively incompatible. As with other reproductive barriers, interspecific hybrid sterility and lethality were once also thought to evolve as pleiotropic side effects of adaptation. Recent work on the molecular genetics of speciation has raised an altogether different possibility—the genes that cause hybrid sterility and lethality often come to differ between species not because of adaptation to the external ecological environment but because of internal evolutionary arms races between selfish genetic elements and the genes of the host genome. Arguably one of the best examples supporting a role of ecological adaptation comes from a population of yellow monkey flowers, Mimulus guttatus, in Copperopolis, California, which recently evolved tolerance to soil contaminants from copper mines and, simultaneously, as an incidental by-product, hybrid lethality in crosses with some off-mine populations. However, in new work, Wright and colleagues show that hybrid lethality is not a pleiotropic consequence of copper tolerance. Rather, the genetic factor causing hybrid lethality is tightly linked to copper tolerance and spread to fixation in Copperopolis by genetic hitchhiking.

New species arise when populations gradually evolve intrinsic reproductive barriers to interbreeding with other populations [1]–[3]. Two species can be reproductively isolated from one another in ways that prevent the formation of interspecific hybrids—the species may, for instance, have incompatible courtship signals or occupy different ecological habitats. Two species can also be reproductively isolated from one another if interspecific hybrids are formed but are somehow unfit—the hybrids may be sterile, inviable, or may simply fall between parental ecological niches. All forms of reproductive isolation limit the genetic exchange between species, preventing their fusion and facilitating their further divergence. Understanding the genetic and evolutionary basis of speciation—a major cause of biodiversity—therefore involves understanding the genetics and evolutionary basis of the traits that mediate reproductive isolation.

Most reproductive barriers arise as incidental by-products of selection—either ecological adaptation or sexual selection. For these cases, the genetic basis of speciation is, effectively, the genetics of adaptation. But hybrid sterility and lethality have historically posed two special problems. Darwin [4] devoted an entire chapter of his Origin of Species to the first problem: as the sterility or lethality of hybrids provides no advantage to parents, how could the genetic factors involved possibly evolve by natural selection? The second problem was recognized much later [5], after the rediscovery of Mendelian genetics: if two species (with genotypes AA and aa) produce, say, sterile hybrids (Aa) due to an incompatibility between the A and a alleles, then how could, e.g., the AA genotype have evolved from an aa ancestor in the first place without passing through a sterile intermediate genotype (Aa)?
Not only does natural selection not directly favor the evolution of hybrid sterility or lethality, but there is reason to believe natural selection positively prevents its evolution.

Together these problems stymied evolutionists and geneticists for decades. T.H. Huxley [6] and William Bateson [5], writing decades apart, each branded the evolution of hybrid sterility one of the most serious challenges for a then-young evolutionary theory. Darwin had, in fact, offered a simple solution to the first problem. Namely, hybrid sterility and lethality are not advantageous per se but rather “incidental on other acquired differences” [4]. Then Bateson [5], in a few short, forgotten lines, solved the second problem (see [7]). Later, Dobzhansky [2] and Muller [8] would arrive at the same solution, showing that hybrid sterility or lethality could evolve readily, unopposed by natural selection, under a two-locus model with epistasis. In particular, they imagined that separate populations diverge from a common ancestor (genotype aabb), with the A allele becoming established in one population (AAbb) and the B allele in the other (aaBB); while the A and B alleles must function on their respective genetic backgrounds, there is no guarantee that the A and B alleles will be functionally compatible with one another. Hybrid sterility and lethality most likely result from incompatible complementary genetic factors that disrupt development when brought together in a common hybrid genome. Dobzhansky [2] and Muller [8] could point to a few supporting examples in fish, flies, and plants. Notably, like Darwin, neither speculated on the forces responsible for the evolution of the genetic factors involved.

Today, there is no doubt that the Dobzhansky-Muller model is correct, as the evidence for incompatible complementary genetic factors is now overwhelming [1],[9]. In the last decade, a fast-growing number of speciation genes involved in these genetic incompatibilities have been identified in mice, fish, flies, yeast, and plants [9]–[11]. Perhaps not surprisingly, these speciation genes often have histories of recurrent, adaptive protein-coding sequence evolution [10],[11]. The signature of selection at speciation genes has been taken by some as tacit evidence for the pervasive role of ecological adaptation in speciation, including in the evolution of hybrid sterility and lethality [12]. What is surprising, however, from the modern molecular analysis of speciation genes is how often their rapid sequence evolution and functional divergence seem to have little to do with adaptation to external ecological circumstances. Instead, speciation genes often (but not always [9]–[11]) seem to evolve as by-products of evolutionary arms races between selfish genetic elements—e.g., satellite DNAs [13],[14], meiotic drive elements [15], cytoplasmic male sterility factors [16]—and the host genes that regulate or suppress them [9]–[11],[17]. The notion that selfish genes are exotic curiosities is now giving way to a realization that selfish genes are common and diverse, each generation probing for transmission advantages at the expense of their bearers, fueling evolutionary arms races and, not infrequently, contributing to the genetic divergence that drives speciation.
Indeed, the case has become so strong that examples of hybrid sterility and lethality genes that have evolved in response to ecological challenges (other than pathogens) appear to be the exception [9],[11],[17].

Perhaps the most clear-cut case in which a genetic incompatibility seems to have evolved as a by-product of ecological adaptation comes from populations of the yellow monkey flower, Mimulus guttatus, from Copperopolis (California, U.S.A.). In the last ∼150 years, the Copperopolis population has evolved tolerance to the tailings of local copper mines (Figure 1). These copper-tolerant M. guttatus plants also happen to be partially reproductively isolated from many off-mine M. guttatus plants, producing hybrids that suffer tissue necrosis and death. In classic work, Macnair and Christie showed that copper tolerance is controlled by a single major factor [18] and hybrid lethality, as expected under the Dobzhansky-Muller model, by complementary factors [19]. Surprisingly, in crosses between tolerant and nontolerant plants, hybrid lethality perfectly cosegregates with tolerance [19],[20]. The simplest explanation is that the copper tolerance allele that spread to fixation in the Copperopolis population also happens to cause hybrid lethality as a pleiotropic by-product. The alternative explanation is that the copper tolerance and hybrid lethality loci happen to be genetically linked; when the copper tolerance allele spread to fixation in Copperopolis, hybrid lethality hitchhiked to high frequency along with it [20]. But with 2n = 28 chromosomes, the odds that copper tolerance and hybrid lethality alleles happen to be linked would seem vanishingly small [20].

Figure 1. Yellow monkey flowers (Mimulus guttatus) growing in the heavy-metal-contaminated soils of copper-mine tailings.

In this issue, Wright and colleagues [21] revisit this classic case of genetic incompatibility as a by-product of ecological adaptation. They make two discoveries, one genetic and the other evolutionary. By conducting extensive crossing experiments and leveraging the M. guttatus genome sequence (www.mimulusevolution.org), Wright et al. [21] map copper tolerance and hybrid necrosis to tightly linked but genetically separable loci, Tol1 and Nec1, respectively. Hybrid lethality is not a pleiotropic consequence of copper tolerance. Instead, the tolerant Tol1 allele spread to fixation in Copperopolis, and the tightly linked incompatible Nec1 allele spread with it by genetic hitchhiking. In a turn of bad luck, the loci happen to fall in a heterochromatic pericentric region, where genome assemblies are often problematic, putting identification of the Tol1 and Nec1 genes out of immediate reach. Wright et al. [21] were, however, able to identify linked markers within ∼0.3 cM of Tol1 and place Nec1 within a 10-kb genomic interval that contains a Gypsy3 retrotransposon, raising two possibilities. First, the Gypsy3 element is unlikely to cause hybrid lethality directly; instead, as transposable elements are often epigenetically silenced in plants, it seems possible that the Nec1-associated Gypsy3 is silenced with incidental consequences for the expression of a gene (or genes) in the vicinity [22]. Second, although the Nec1 interval is 10 kb in the reference genome of M. guttatus, it could be larger in the (not-yet-sequenced) Copperopolis population, perhaps harboring additional genes.

With Tol1 and Nec1 mapped near and to particular genomic scaffolds, respectively, Wright et al.
were able to investigate the evolutionary history of the genomic region. Given the clear adaptive significance of copper tolerance in Copperopolis plants, we might expect to see the signatures of a strong selective sweep in the Tol1 region—a single Tol1 haplotype may have spread to fixation so quickly that all Copperopolis descendant plants bear the identical haplotype and thus show strongly reduced population genetic variability in the Tol1-Nec1 region relative to the rest of the genome [23],[24]. After the selective sweep is complete, variability in the region ought to recover gradually as new mutations arise and begin to fill out the mutation-drift equilibrium frequency spectrum expected for neutral variation in the Copperopolis population [25],[26]. Given that Tol1 reflects an adaptation to mine tailings established just ∼150 generations ago, there would have been little time for such a recovery. And yet, while Wright et al. find evidence of moderately reduced genetic variability in the Tol1-Nec1 genomic region, the magnitude of the reduction is hardly dramatic relative to the genome average.

How, then, is it possible that the Tol1-Nec1 region swept to fixation in Copperopolis in fewer than ∼150 generations and yet left no strong footprint of a hitchhiking event? One possibility is that rather than a single, unique Tol1-Nec1 haplotype contributing to fixation, causing a “hard sweep,” multiple Tol1-Nec1 haplotypes sampled from previously standing genetic variation contributed to fixation, causing a “soft sweep” [27]. A soft sweep would be plausible if Tol1 and Nec1 both segregate in the local off-mine ancestral population and if the two were, coincidentally, found on the same chromosome more often than expected by chance (i.e., in linkage disequilibrium). Then, after the copper mines were established, multiple plants with multiple Tol1 haplotypes (and, by association, Nec1) could have colonized the newly contaminated soils of the mine tailings. Tol1 segregates at ∼9% in surrounding populations, suggesting that standing genetic variation for copper tolerance may well have been present in the ancestral populations.

Two big questions remain for the Tol1-Nec1 story, and both would be readily advanced by identification of Tol1 and Nec1. The first question concerns the history of Tol1 haplotypes in Copperopolis and surrounding off-mine populations. As Nec1-mediated hybrid lethality is incomplete, the ∼9% Tol1 frequency in surrounding populations could reflect its export via gene flow from the Copperopolis populations. Conversely, if there was a soft sweep from standing Tol1 variation in surrounding off-mine populations, then Tol1 and Nec1 may still be in linkage disequilibrium in those populations (assuming ∼150 years of recombination has not broken up the association). Resolving these alternative possibilities is a matter of establishing the history of movement of Tol1 haplotypes into or out of the Copperopolis population. The soft sweep scenario, if correct, presents a population genetics puzzle: during the historical time that mutations accumulated among the multiple tolerant but incompatible Tol1-Nec1 haplotypes in the ancestral off-mine populations, why did recombination fail to degrade the association, giving rise to tolerant but compatible haplotypes?

The second question concerns the identity of Nec1 (or, if it really is a Gypsy3 element, the identity of the nearby gene whose expression is disrupted as a consequence).
The answer bears on one of the newly emerging generalizations about genetic incompatibilities in plants [9]. Recently, Bomblies and Weigel [28] synthesized a century's worth of observations on the commonly seen necrosis phenotype in plant hybrids and, based on their own genetic analyses in Arabidopsis [29], suggested that many of these cases may have a common underlying basis: incompatibilities between plant pathogen resistance genes can cause autoimmune responses that result in tissue necrosis and hybrid lethality. Hybrid necrosis, indeed, appears to involve pathogen resistance genes across multiple plant groups [9],[28]. It remains to be seen if Nec1-mediated lethality provides yet another instance.  相似文献

20.
Lymph nodes are meeting points for circulating immune cells. A network of reticular cells that ensheathe a mesh of collagen fibers crisscrosses the tissue in each lymph node. This reticular cell network distributes key molecules and provides a structure for immune cells to move around on. During infections, the network can suffer damage. A new study has now investigated the network’s structure in detail, using methods from graph theory. The study showed that the network is remarkably robust to damage: it can still support immune responses even when half of the reticular cells are destroyed. This is a further important example of how network connectivity achieves tolerance to failure, a property shared with other important biological and nonbiological networks.

Lymph nodes are critical sites for immune cells to connect, exchange information, and initiate responses to foreign invaders. More than 90% of the cells in each lymph node—the T and B lymphocytes of the adaptive immune system—only reside there temporarily and are constantly moving around as they search for foreign substances (antigen). When there is no infection, T and B cells migrate within distinct regions. But lymph node architecture changes dramatically when antigen is found, and an immune response is mounted. New blood vessels grow and recruit vast numbers of lymphocytes from the blood circulation. Antigen-specific cells divide and mature into “effector” immune cells. The combination of these two processes—increased influx of cells from outside and proliferation within—can make a lymph node grow 10-fold within only a few days [1]. Accordingly, the structural backbone supporting lymph node function cannot be too rigid; otherwise, it would impede this rapid organ expansion. This structural backbone is provided by a network of fibroblastic reticular cells (FRCs) [2], which secrete a form of collagen (type III alpha 1) that produces reticular fibers—thin, threadlike structures with a diameter of less than 1 μm. Reticular fibers cross-link and form a spider web–like structure. The FRCs surrounding this structure form the reticular cell network (Fig 1), which was first observed in the 1930s [3]. Interestingly, experiments in which the FRCs were destroyed showed that the collagen fiber network remained intact [4].

Fig 1. Structure of the reticular cell network. The reticular cell network is formed by fibroblastic reticular cells (FRCs) whose cell membranes ensheathe a core of collagen fibers that acts as a conduit system for the distribution of small molecules [5]. In most other tissues, collagen fibers instead reside outside cell membranes, where they form the extracellular matrix. Inset: graph structure representing the FRCs in the depicted network as “nodes” (circles) and the direct connections between them as “edges” (lines). Shape and length of the fibers are not represented in the graph.

Reticular cell networks do not only support lymph node structure; they are also important players in the immune response. Small molecules from the tissue environment or from pathogens, such as viral protein fragments, can be distributed within the lymph node through the conduit system formed by the reticular fibers [5]. Some cytokines and chemokines that are vital for effective T cell migration—and the nitric oxide that inhibits T cell proliferation [6]—are even produced by the FRCs themselves.
Moreover, the network is thought of as a “road system” for lymphocyte migration [7]: in 2006, a seminal study found that lymphocytes roaming through lymph nodes were in contact with network fibers most of the time [8]. A few years before, it had become possible to observe lymphocyte migration in vivo by means of two-photon microscopy [9]. Movies from these experiments strikingly demonstrated that individual cells were taking very different paths, engaging in what appeared to be a “random walk.” But these movies did not show the structures surrounding the migrating cells, which created an impression of motion in empty space. Appreciating the role of the reticular cell network in this pattern of motion [8] suggested that the complex cell trajectories reflect the architecture of the network along which the cells walk.

Given its important functions, it is surprising how little we know about the structure of the reticular cell network—compared to, for instance, our wealth of knowledge on neuron connectivity in the brain. In part this is because the reticular cells are hard to visualize. In vivo techniques like two-photon imaging do not provide sufficient resolution to reliably capture the fine-threaded mesh. Instead, thin tissue sections are stained with fluorescent antibodies that bind to the reticular fibers and are imaged with high-resolution confocal microscopy to reveal the network structure. One study [10] applied this method to determine basic parameters such as branch length and the size of gaps between fibers. Here, we discuss a recent study by Novkovic et al. [11] that took a different approach to investigating properties of the reticular cell network structure: they applied methods from graph theory.

Graph theory is a classic subject in mathematics that is often traced back to Leonhard Euler’s stroll through 18th-century Königsberg, Prussia. Euler could not find a circular route that crossed each of the city’s seven bridges exactly once, and wondered how he could prove that such a route does not exist. He realized that this problem could be phrased in terms of a simple diagram containing points (parts of the city) and lines between them (bridges). Further detail, such as the layout of the city’s streets, was irrelevant. This was the birth of graph theory—the study of objects consisting of points (nodes) connected by lines (edges). Graph theory has diverse applications ranging from logistics to molecular biology. Since the beginning of this century, there has been a strong interest in applying graph theory to understand the structure of networks that occur in nature—including biological networks, such as neurons in the brain, and, more recently, social networks like friendships on Facebook. Various mathematical models of network structures have been developed in an attempt to understand network properties that are relevant in different contexts, such as the speed at which information spreads or the amount of damage that a network can tolerate before breaking into disconnected parts. Three well-known network topologies are random, small-world, and scale-free networks (Box 1). Novkovic et al. modeled reticular cell networks as graphs by considering each FRC to be a node and the fiber connections between FRCs to be edges (Fig 1).
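To illustrate the kind of graph analysis this involves (none of this is Novkovic et al.’s data or code), the sketch below, written in Python and assuming the networkx library, builds a synthetic small-world graph as a stand-in for a reconstructed FRC network, reports the two small-world diagnostics discussed in Box 1 below (clustering and average shortest path length), and then removes random fractions of nodes to see how much of the network stays connected. The graph size and parameters are arbitrary.

```python
import random
import networkx as nx

random.seed(0)

# Synthetic stand-in for a reconstructed FRC graph: a Watts-Strogatz
# small-world graph (high clustering, short paths). A real analysis would
# build the graph from imaged cell positions and fiber connections instead.
G = nx.connected_watts_strogatz_graph(n=500, k=6, p=0.05, seed=0)

print("average clustering coefficient:", round(nx.average_clustering(G), 3))
print("average shortest path length:", round(nx.average_shortest_path_length(G), 2))

def giant_fraction(graph, original_n):
    """Fraction of the original nodes contained in the largest connected component."""
    if graph.number_of_nodes() == 0:
        return 0.0
    giant = max(nx.connected_components(graph), key=len)
    return len(giant) / original_n

# Randomly delete increasing fractions of nodes (mimicking partial FRC ablation)
# and track how much of the network remains connected.
nodes = list(G.nodes())
for frac in (0.1, 0.3, 0.5, 0.7):
    removed = random.sample(nodes, int(frac * len(nodes)))
    damaged = G.copy()
    damaged.remove_nodes_from(removed)
    print(f"removed {int(frac * 100):2d}% of nodes -> "
          f"largest component holds {giant_fraction(damaged, len(nodes)):.2f} of original nodes")
```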

Box 1. Graph Theory and the Robustness of Real Networks

After the publication of several landmark papers on network topology at the end of the previous century, the science of complex networks has grown explosively. One of these papers described “small-world” networks [16] and demonstrated that several natural networks have the amazing property that the average length of the shortest paths between arbitrary nodes is unexpectedly small (making it a “small world”), even if most of the network nodes are clustered (that is, when neighbors of neighbors tend to be neighbors). The Barabasi group published a series of papers describing “scale-free” networks [17,18] and demonstrated that scale-free networks are extremely robust to random deletions of nodes—the vast majority of the nodes can be deleted before the network falls apart [15]. In scale-free networks, the number of edges per node is distributed according to a power law, implying that most nodes have very few connections and a few nodes are hubs with very many connections. Thus, the topology of complex networks can be scale-free, small-world, or neither, as with random networks [19]. Novkovic et al. [11] quantify the clustering among the neighbors of each node and the average shortest path length between arbitrary nodes, finding that reticular cell networks have small-world properties. Whether or not these networks have scale-free properties is not explicitly examined in the paper, but given that they are embedded in a three-dimensional space, that they “already” lose functionality when about 50% of the FRCs are ablated, and that the number of connected protrusions per FRC is not distributed according to a power law (see the data underlying their Figure 2g), reticular cell networks are not likely to be scale-free. Thus, the enhanced robustness of reticular cell networks is most likely due to their high local connectivity: networks lose functionality when they fall apart into disconnected components, and high clustering means that the graph is unlikely to split apart when a single node is removed, because the neighbors of that node tend to stay connected [14]. Additionally, since the reticular cell network has a spatial structure (unlike the internet or the Facebook social network), its high degree of clustering is probably due to preferential attachment to nearby FRCs when the network develops, which agrees well with Novkovic et al.’s recent classification of it as a small-world network with lattice-like properties [11].

Some virus infections are known to damage reticular cell networks [12], either through infection of the FRCs or as a bystander effect of inflammation. It is therefore important to understand to what extent the network structure is able to survive partial destruction. Novkovic et al. first approached this question by performing computer simulations, in which they randomly removed FRC nodes from the networks they had reconstructed from microscopy images. They found that they had to remove at least half of the nodes to break the network apart into disconnected parts. To study the effect of damage on the reticular cell network in vivo rather than in silico, Novkovic et al. used an experimental technique called conditional cell ablation. In this technique, a gene encoding the diphtheria toxin receptor (DTR) is inserted after a specific promoter that leads it to be expressed in a particular cell type of interest. Administration of diphtheria toxin kills DTR-expressing cells, leaving other cells unaffected.
To study the effect of damage on the reticular cell network in vivo rather than in silico, Novkovic et al. used an experimental technique called conditional cell ablation. In this technique, a gene encoding the diphtheria toxin receptor (DTR) is placed behind a promoter that drives its expression in a particular cell type of interest. Administration of diphtheria toxin then kills the DTR-expressing cells while leaving other cells unaffected. By expressing DTR under the control of the FRC-specific Ccl19 promoter, Novkovic et al. were able to selectively destroy the reticular cell network and then watch it grow back over time. Regrowth took about four weeks, and the resulting network properties were no different from those of a network formed naturally during development. Thus, it seems that the reticular cell network structure is imprinted and reemerges even after severe damage. This finding ties in nicely with previous data from the same group [13], showing that reticular cell networks form even in the absence of lymphotoxin-beta receptor, an otherwise key player in many aspects of lymphoid tissue development. Together, these data make a compelling case that network formation is a robust, fundamental trait of FRCs.

Next, Novkovic et al. varied the dose of diphtheria toxin such that only a fraction of FRCs were destroyed, effectively removing a random subset of the network nodes. They measured in two ways how FRC loss affects the immune system: they tracked T cell migration using two-photon microscopy, and they determined the number of antiviral T cells produced by the mice after an infection. Remarkably, as predicted by their computer simulations, lymph nodes appeared capable of tolerating the loss of up to half of their FRCs with little effect on either T cell migration or the number of activated antiviral T cells. Only when more than half of the FRCs were destroyed did T cell motion slow down significantly and the mice fail to mount effective antiviral immune responses. Such tolerance of damage is impressive—for comparison, consider what would happen if one were to close half of London’s subway stations!

Robustness to damage is of interest for many different networks, from power grids to the internet [14]. In particular, the “scale-free” architecture that features rare, strongly connected “hub” nodes is highly robust to random damage [15]. Novkovic et al. did not address whether the reticular cell network is scale-free, but it is likely that it isn’t (Box 1). Instead, the network’s robustness probably arises from its high degree of clustering, which means that the neighbors of each node are likely to be connected to each other as well. If a node is removed from a clustered network, a short detour is still likely to be available by going through two of its neighbors. Therefore, one would have to randomly remove a large fraction of the nodes before the network structure breaks down. High clustering in the network could be a consequence of the fact that multiple fibers extend from each FRC and establish connections to many FRCs in its vicinity. A question not yet addressed by Novkovic et al. is how robust reticular cell networks would be to nonrandom damage, such as a locally spreading viral infection. In fact, scale-free networks are drastically more vulnerable to targeted than to random damage: the United States flight network could be brought to a grinding halt by closing just a few hub airports [15]. Less is known about robustness to nonrandom damage for other network architectures, and the findings by Novkovic et al. motivate future research in this direction.
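The contrast between random and targeted damage can be illustrated with the same kind of toy simulation. The sketch below (again assuming Python with networkx, and using generic model graphs rather than real FRC reconstructions) removes either randomly chosen nodes or the most highly connected hubs and reports how much of each network stays connected.

```python
# Toy contrast between random damage and targeted (hub-first) damage, using
# generic model graphs rather than real FRC reconstructions.
import random
import networkx as nx

def surviving_giant_fraction(graph, victims):
    damaged = graph.copy()
    damaged.remove_nodes_from(victims)
    giant = max(nx.connected_components(damaged), key=len)
    return len(giant) / damaged.number_of_nodes()

random.seed(2)
clustered = nx.watts_strogatz_graph(500, 6, 0.1, seed=2)   # small-world, lattice-like
scale_free = nx.barabasi_albert_graph(500, 3, seed=2)      # a few highly connected hubs

for fraction in (0.1, 0.3, 0.5):
    n_remove = int(fraction * 500)
    # Hub-first removal: delete the nodes with the highest degree.
    hubs = [node for node, _ in
            sorted(scale_free.degree, key=lambda kv: kv[1], reverse=True)[:n_remove]]
    print(f"remove {fraction:.0%}:",
          f"clustered/random {surviving_giant_fraction(clustered, random.sample(list(clustered.nodes), n_remove)):.0%},",
          f"scale-free/random {surviving_giant_fraction(scale_free, random.sample(list(scale_free.nodes), n_remove)):.0%},",
          f"scale-free/targeted {surviving_giant_fraction(scale_free, hubs):.0%}")
```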
Novkovic et al. did not yet explicitly identify all mechanisms that hamper T cell responses when more than half of the FRCs are depleted. But given the reticular cell network’s many different functions, this could occur in several ways. For instance, severe depletion might prevent the secretion of important molecules, halt the migration of T cells, prevent the anchoring of antigen-presenting dendritic cells (DCs) on the network, or cause structural disarray in the tissue. In addition to the effects on T cell migration, Novkovic et al. also showed that the number of DCs in fact decreased when FRCs were depleted, emphasizing that several mechanisms are likely at play. Disentangling these mechanisms will require substantial additional research effort.

The current reticular cell network reconstruction by Novkovic et al. is based on thin tissue slices. It will be exciting to study the network architecture once it can be visualized in the whole organ. Some aspects of the network may then look different. For instance, FRCs near the border of a slice will have their degree of connectivity underestimated, because not all of their neighbors in the network can be seen. Further refinements of the network analysis may also take into account that reticular fibers are real physical objects situated in three-dimensional space (unlike abstract connections such as friendships). Migrating T cells may travel more quickly along a short, straight fiber than along a long, curved one, but the network graph does not make this distinction. More generally, it would be interesting to understand conceptually how reticular cell networks help foster immune cell migration. While at first it appears obvious that having a “road system” should make it easier for cells to roam lymph node tissue, three different theoretical studies have in fact all concluded that effective T cell migration should also be possible in the absence of a network [20–22]. A related question is whether T cells are constrained to move only on the network or merely use it for loose guidance. For instance, could migrating T cells be in contact with two or more network fibers at once, or with none at all? This would make the relationship between cell migration and network structure more complex than the graph structure alone suggests.

There is also some evidence that T cells can migrate according to what is called a Lévy walk [23]—a kind of random walk in which frequent short steps are interspersed with a few very long steps, a search strategy that appears to occur frequently in nature (though this is debated [24]). While there is so far no evidence that T cells perform a Lévy walk when roaming the lymph node [25], this may be in part due to limitations of two-photon imaging, and one could speculate that reticular cell networks might in fact be constructed in a way that facilitates this or another efficient kind of “search strategy” (a toy simulation of such a walk is sketched at the end of this piece). Resolving this question will require substantial improvements in imaging technology, allowing individual T cells to be tracked across an entire lymph node.

No doubt further studies will address these and other questions and provide further insights into how reticular cell networks benefit immune responses. Such advances may help us design better treatments against infections that damage the network. They may also help us understand how best to administer vaccines or tumor immunotherapies in a way that ensures optimal delivery to immune cells in the lymph node. As is nicely illustrated by the study of Novkovic et al., mathematical methods may well play key roles in this quest.
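As a closing aside, the Lévy walk idea mentioned above is easy to illustrate with a toy simulation. The sketch below (assuming Python with NumPy; the step-length distributions are arbitrary choices, not fitted to any T cell data) compares a walker whose step lengths follow a heavy-tailed Pareto distribution with one taking steps of constant length.

```python
# Speculative toy comparison, not fitted to any T cell data: a walker with
# heavy-tailed (Pareto) step lengths versus one with constant step lengths.
import numpy as np

rng = np.random.default_rng(0)
n_steps = 1000

def random_walk(step_lengths):
    # Each step goes in a uniformly random direction in the plane.
    angles = rng.uniform(0.0, 2.0 * np.pi, size=len(step_lengths))
    steps = np.column_stack((step_lengths * np.cos(angles),
                             step_lengths * np.sin(angles)))
    return np.cumsum(steps, axis=0)

# Classical Pareto (shape 1.5, minimum 1): mostly short steps, rare very long ones.
levy_steps = rng.pareto(1.5, size=n_steps) + 1.0
fixed_steps = np.full(n_steps, levy_steps.mean())   # same mean length, no long jumps

for name, steps in (("Levy-like walk", levy_steps), ("fixed-step walk", fixed_steps)):
    track = random_walk(steps)
    print(f"{name}: net displacement after {n_steps} steps = {np.linalg.norm(track[-1]):.1f}")
```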
