2.
Adam Hulman, Yuri D. Foreman, Martijn C. G. J. Brouwers, Abraham A. Kroon, Koen D. Reesink, Pieter C. Dagnelie, Carla J. H. van der Kallen, Marleen M. J. van Greevenbroek, Kristine Færch, Dorte Vistisen, Marit E. Jørgensen, Coen D. A. Stehouwer, Daniel R. Witte 《PLoS biology》2021,19(3)
In response to a study previously published in PLOS Biology, this Formal Comment thoroughly examines the concept of “glucotypes” with regard to its generalisability, interpretability, and relationship to more traditional measures used to describe data from continuous glucose monitoring.

Although the promise of precision medicine has led to advances in the recognition and treatment of rare monogenic forms of diabetes, its impact on prevention and treatment of more common forms of diabetes has been underwhelming [1]. Several approaches to the subclassification of individuals with, or at high risk of, type 2 diabetes have been published recently [2–4]. Hall and colleagues introduced the concept of “glucotypes” in a research article [3] that has received enormous attention in the highest impact scientific journals [5–8], mostly in relation to precision medicine. The authors developed an algorithm to identify patterns of glucose fluctuations based on continuous glucose monitoring (CGM). They named the 3 identified patterns the “low variability,” “moderate variability,” and “severe variability” glucotypes. Each individual was characterised by the proportion of time spent in the 3 glucotypes and was assigned to an overall glucotype based on the highest proportion. They argued that glucotypes take into account a more detailed picture of glucose dynamics than commonly used single time point or average-based measures, revealing subphenotypes within traditional diagnostic categories of glucose regulation. Even though the study was based on data from only 57 individuals without a prior diabetes diagnosis, others have interpreted the results as indicating that glucotypes might identify individuals at an early stage of glucose dysregulation, suggesting a potential role in diabetes risk stratification and prevention [5].
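The assignment step described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the labels and input format are assumptions, and Hall and colleagues derive the window-level pattern labels from spectral clustering of CGM windows, which is not reproduced here.

```python
def assign_glucotype(time_fractions):
    """Assign an overall glucotype from per-pattern proportions of time.

    `time_fractions` maps each glucotype label to the fraction of CGM
    windows classified as that pattern (fractions sum to 1). The overall
    glucotype is simply the pattern with the highest proportion.
    """
    return max(time_fractions, key=time_fractions.get)

# Illustrative participant: most time spent in the moderate pattern.
fractions = {"low": 0.15, "moderate": 0.60, "severe": 0.25}
print(assign_glucotype(fractions))  # -> moderate
```

Note that a hard argmax like this discards the underlying proportions, which is precisely the loss of precision the comment goes on to criticise.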
However, before glucotypes can become “an important tool in early identification of those at risk for type 2 diabetes” [3], the concept requires thorough validation. Therefore, we explore the generalisability and interpretability of glucotypes and their relationship to traditional CGM-based measures.

We used data from The Maastricht Study [9] and the PRE-D Trial [10], comprising a total of 770 diabetes-free individuals with a 7-day CGM registration. We observed that the average proportion of time spent in the low variability glucotype was low both in The Maastricht Study (6%) and the PRE-D Trial (4%), compared to 20% in the original study. A reason for the difference may be that our study populations were on average 11 to 12 years older and that the PRE-D Trial (n = 116) included only overweight and obese individuals with prediabetes. In The Maastricht Study, the median (Q1 to Q3) body mass index was 25.9 kg/m2 (23.4 to 28.7), and 72% had normal glucose tolerance. As a logical consequence, the severe glucotype was most common in the PRE-D Trial (55%). Regardless, our data show that the initial estimates of the different glucotype prevalences do not necessarily generalise to other populations, especially in age groups at increased risk of type 2 diabetes.

Hall and colleagues described glucotypes as a new measure of glucose variability, a clinically relevant metric of glycaemic patterns [3]. In the figures accompanying the original publication, the low variability pattern was characterised by both the lowest mean glucose level and the lowest variation, while the severe pattern had both the highest mean glucose level and the highest variation. As such, these examples did not give an intuition as to whether glucotypes were predominantly driven by glucose variability or by mean glucose levels. We therefore present 3 examples from the PRE-D Trial (Fig 1). The first 2 profiles are very similar with regard to glucose variability.
Thus, the driver of the most severe glucotype of the second participant is clearly the slightly higher mean glycaemic level. Also, even though the third participant has a much larger variation than the first two, the proportion of time in the severe glucotype is not higher than for the second participant as one would expect from a classical measure of glucose variability. To investigate this further, we assessed the association between glucotypes and classical CGM measures, i.e., the mean CGM glucose level (Fig 2A) and the coefficient of variation (Fig 2B) in The Maastricht Study. The scatterplots show a clear association between the mean CGM glucose and glucotypes. They also suggest that participants with a high proportion of time in the moderate glucotype do not have high variation in glucose. Rather than a biological feature, this may well be a methodological consequence of being assigned to the middle cluster. If large fluctuations were present, glucose levels would reach either low or high values, resulting in a higher proportion of time spent in the low or severe glucotypes, respectively (assuming a strong association between glucotypes and mean CGM glucose). Therefore, we decided to quantify this association using regression analysis where glucotype proportions were the outcomes, and the mean CGM glucose concentration was the independent variable modelled with natural cubic splines (more details on the specification of the models are given in Supporting information S1–S3 Codes). Then, we used the equation estimated in The Maastricht Study to predict glucotypes in the external validation sample (PRE-D Trial, Fig 2C). First, similarly to Hall and colleagues, we assigned individuals to the pattern with the highest proportion of time and then compared the predicted and the observed glucotypes. We found that in 107 out of 116 individuals, the glucotype was predicted correctly when using only the mean CGM glucose value. 
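The regression-and-prediction exercise described above can be sketched as follows. This is a minimal illustration on synthetic data: a plain cubic polynomial basis stands in for the natural cubic splines of the actual analysis, and every number below is fabricated for the example rather than taken from The Maastricht Study or the PRE-D Trial.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data (the real analysis fitted on The Maastricht
# Study and validated on the PRE-D Trial; values here are illustrative).
n = 300
mean_glucose = rng.uniform(4.5, 8.0, n)            # mmol/L
# Fake "observed" glucotype proportions that vary smoothly with mean glucose.
logits = np.column_stack([-(mean_glucose - 4.5),
                          -np.abs(mean_glucose - 6.0),
                          mean_glucose - 7.0])
props = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

# One flexible curve per glucotype proportion; a cubic polynomial basis
# stands in for the natural cubic splines used in the comment.
X = np.vander(mean_glucose, 4)                     # columns x^3, x^2, x, 1
coefs, *_ = np.linalg.lstsq(X, props, rcond=None)
pred = X @ coefs

# Hard assignment: the glucotype with the highest proportion of time.
agree = (props.argmax(axis=1) == pred.argmax(axis=1)).mean()
rmse = np.sqrt(((props - pred) ** 2).mean(axis=0))
print(f"hard-label agreement: {agree:.2f}")
print("per-glucotype RMSE:", np.round(rmse, 3))
```

The same two summaries — agreement of hard labels and per-glucotype RMSE on the continuous proportions — are the quantities reported in the comment for the external validation.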
When considering the glucotypes as continuous proportions of time, the root mean squared errors (RMSEs) were 0.05, 0.09, and 0.07 for the low, moderate, and severe variability glucotypes, respectively, indicating good predictive ability. These results demonstrate that glucotypes either mainly reflect the mean CGM glucose level or do not translate to external datasets (e.g., due to overfitting). To investigate this further, we conducted the same analyses as described for the PRE-D Trial in the original data from Hall and colleagues and found a slightly weaker, but still strong, association between mean CGM glucose levels and glucotypes. Using the regression model from The Maastricht Study, we could correctly predict 79% of the glucotypes, while the RMSEs were 0.11, 0.15, and 0.13.

Fig 1. Example CGM profiles of participants in the PRE-D Trial with corresponding proportion of time spent in different glucotypes and conventional measures (mean and CV). CGM, continuous glucose monitoring; CV, coefficient of variation.

Fig 2. Observed proportion of time spent in the 3 glucotypes by mean CGM glucose (A) and coefficient of variation (B) in The Maastricht Study, and by mean CGM glucose in the PRE-D Trial (C), alongside predicted proportions based on the regression analysis in The Maastricht Study. CGM, continuous glucose monitoring.

Although the transformation of continuous measures into categorical ones is a common procedure in clinical research, assigning individuals to the glucotype with the highest proportion of time runs very much against the “precision” tenet of precision medicine. In line with this, a recent study has demonstrated how simple clinical features outperformed clusters in predicting relevant clinical outcomes [11]. This is especially problematic when a method does not provide clear separation between clusters, which can be quantified by calculating relative entropy [12].
A relative entropy of zero would mean that all individuals spend one-third of the time in each of the 3 glucotypes, while a value of one would indicate that each individual spends the entire time period in only one of the 3 glucotypes. In the original cohort of Hall and colleagues [3], we calculated a relative entropy of 0.24, indicating that cluster separation is far from optimal; together with the previous results, this questions the claim that the glucotype is really a “more comprehensive measure of the pattern of glucose excursions than the standard laboratory tests in current use” [3].

In conclusion, we demonstrate in 2 large external datasets that the assessment of glucotypes does not offer novel insights beyond the mean CGM glucose, highlighting the importance of large development datasets and external validation for data-driven algorithms. As CGM becomes more widely used in large clinical studies, including among individuals without diabetes, glucose patterns derived from CGMs will be an important focus area in future diabetes research. However, it is important that scientific scrutiny precedes the introduction of emerging tools that promise to identify individuals at high risk of type 2 diabetes and its late complications at an earlier stage of disease progression, especially in an observational setting. Furthermore, future efforts towards precision medicine for diabetes prevention and treatment should go beyond the glucocentric approach we have seen so far. We know that hyperglycaemia is a late feature of diabetes development and that patients benefit most from a multifactorial treatment approach [13]. A multifactorial approach, with relevance to the aetiology of micro- and macrovascular complications, may also yield a more clinically useful risk stratification of nondiabetic individuals [14].
Even so, if we aim for precision medicine, we should aim to retain as much precision as possible at every step of the process, by treating determinants and outcomes as continuous measures if possible and by retaining information on the uncertainty of any hard classification such as cluster membership.
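The relative entropy index discussed in the comment above can be computed as follows. This is one common Shannon-entropy formulation that matches the stated endpoints (0 for uniform membership, 1 for crisp membership); the comment cites [12] for the exact definition, so treat this as a sketch under that assumption.

```python
import numpy as np

def relative_entropy(proportions):
    """Cluster-separation index on per-individual membership proportions.

    `proportions` is an (n_individuals, n_clusters) array whose rows sum
    to 1. Returns 0 when every row is uniform (no separation at all) and
    1 when every row is concentrated in a single cluster.
    """
    p = np.asarray(proportions, dtype=float)
    p = np.clip(p, 1e-12, 1.0)                  # guard against log(0)
    h = -(p * np.log(p)).sum(axis=1)            # per-individual entropy
    return float(1.0 - h.mean() / np.log(p.shape[1]))

uniform = np.full((5, 3), 1 / 3)                # one-third in each glucotype
crisp = np.eye(3)[[0, 1, 2, 0, 1]]              # all time in one glucotype
print(relative_entropy(uniform))  # ≈ 0
print(relative_entropy(crisp))    # ≈ 1
```

On this scale, the 0.24 reported for the original cohort sits much closer to the no-separation end than to crisp cluster membership.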
3.
In response to the Hong Kong Principles for assessing researchers, this Formal Comment argues that it is time to take gender and diversity considerations seriously in the pursuit of fostering research integrity; this requires acknowledging and reshaping the influence of research assessment criteria on researcher representation.

The Hong Kong Principles (HKP) for assessing researchers [1], a product of the 2019 World Conference on Research Integrity, were published in PLOS Biology this past July. The principles concern research institutions’ assessment of researchers according to responsible research criteria. The HKP value issues ranging from complete reporting and open science to a diversity of other essential research tasks (e.g., peer reviewing).

We applaud this initiative and believe it is an important step forward because it directly addresses a root cause of many issues that erode research integrity: the unfair reward structures and perverse incentives that researchers encounter [2]. Reforming research assessment practice to reward responsible research, rather than privileging publication volume, is crucial for incentivizing research integrity.

We were surprised that the HKP explicitly refrain from considering gender and other issues related to diversity and inclusiveness in researcher assessment. They rather state that, “[t]hese themes require an assessment of a group of researchers (e.g., research institution) when making decisions about funding allocations or human resources policies. Furthermore, these issues concern the social justice and societal relevance of research rather than research integrity.” (p. 9) [1]. We disagree on a number of counts.

First, we challenge the assertion that gender and diversity issues concern social justice and societal relevance of research rather than research integrity.
Such a strong distinction between societal relevance and research integrity is difficult to justify; although the field of research integrity was traditionally narrowly defined as pertaining to misconduct issues, it is increasingly acknowledged as addressing general issues of research quality, relevance, and reliability [3]. Furthermore, diversity in research teams is not only important for issues related to social justice and societal relevance, but also crucial for maintaining scientific objectivity and trust in science [4]. Researchers’ backgrounds influence the way that research is funded, conducted, and applied; to prevent science from becoming biased toward certain assumptions and avoid gaps in knowledge, diverse research teams are needed [4]. A lack of diversity in the research community can be detrimental because, through shutting out important perspectives from the research process, it can create undesirable scientific and social effects. For instance, current health research methods commonly entail gender bias, possibly due to the underrepresentation of women in leading research and publishing positions, which not only distorts the public health knowledge base but can also lead to health disparities [4]. Similarly, a lack of early attention and research on the differential impact of the Coronavirus Disease 2019 (COVID-19) on people of different ethnic groups has posed a challenge in curbing mortality and poor health outcomes among Black, Asian, and other ethnic minority groups in several countries such as the United Kingdom and the United States of America [5]. When the research knowledge base is biased in terms of gender or other types of diversity, as is the case with these examples, the trustworthiness of the research itself and its benefit for society are undermined, as it becomes questionable whether the research has employed the right questions and methods to elicit relevant findings for society. 
Therefore, inclusion of diverse perspectives should not just focus on improving participation of patients and other citizens in research—good practices highlighted in the HKP’s article—but also on improving representation in research teams themselves.

Second, in our view, current researcher assessment practices are themselves part of the funding allocation schemes and human resources policies of research institutions; they affect individual researchers and systematically disadvantage entire groups of researchers, including women and those from a minority background [6]. For instance, the focus on number of publications in researcher assessment disadvantages researchers (mostly female) who need to temporarily take leave to have children [7]. To improve representation in relation to gender and diversity within research teams and departments, it is essential to pay attention to the influence of assessment criteria beyond individual performance. The HKP article [1] describes how recognizing other tasks, such as peer review and mentoring, leads to an increase in the number of women promoted (p. 8). Other research suggests that using altmetrics to assess research impact might help narrow the gap between men and women [8]. Hence, the individual assessment of researchers is intimately related to group performance. It is disappointing that the HKP fail to recognize this or to call for attention to the impact of their recommended assessment criteria on diversity issues.

Our plea to the research integrity community is to take gender and diversity considerations seriously, especially in the pursuit of fostering research integrity. This means adopting researcher assessment approaches that acknowledge that systemic disadvantages can be introduced or exacerbated by individual assessment criteria, and that contribute toward improving representation within research teams and across seniority levels.
4.
Peter J. Hotez 《PLoS biology》2021,19(1)
The United States witnessed an unprecedented politicization of biomedical science starting in 2015 that has exploded into a complex, multimodal anti-science empire operating through mass media, political elections, legislation, and even health systems. Anti-science activities now pervade the daily lives of many Americans, and threaten to infect other parts of the world. We can attribute the deaths of tens of thousands of Americans from COVID-19, measles, and other vaccine-preventable diseases to anti-science. The acceleration of anti-science activities demands not only new responses and approaches but also international coordination. Vaccines and other biomedical advances will not be sufficient to halt COVID-19 or future potentially catastrophic illnesses unless we simultaneously counter anti-science aggression.

This Essay argues that COVID-19 exposed how a rising tide of anti-science rhetoric and activities can dramatically exploit society's vulnerabilities to an infectious disease, suggesting that anti-science extremism has become as big a threat as the virus itself.
“Without science, democracy has no future.”—Maxim Gorky, April 1917

The newest (October 2020) projections from the University of Washington Institute for Health Metrics and Evaluation (IHME) Coronavirus Disease 2019 (COVID-19) forecasting team reveal a grim reality. Their estimates indicate that more than 510,000 Americans could lose their lives by February 28, 2021 [1], representing more than a doubling of the current estimate of 220,000 deaths (although not all groups agree with these estimates). For most of 2020, the US has been the epicenter of the COVID-19 pandemic, leading the world in new cases and deaths. This dire situation is a consequence of our government’s failure to launch a coordinated national response and roadmap and its refusal to aggressively promote nonpharmaceutical interventions (NPIs), especially face masks, social distancing mandates, school closures, testing, and contact tracing [2]. In its place, and as the cases and deaths mounted, the White House and its coronavirus task force, and famously the President himself, organized a campaign of disinformation [3].

Central to White House anti-science promotion efforts were attempts by key officials to downplay the severity of COVID-19 and its long-haul consequences, inflate the curative properties of certain medicines such as hydroxychloroquine, falsely attribute COVID-19 deaths to comorbidities in order to artificially reduce actual disease mortality rates, and make scientifically unsubstantiated claims about herd immunity (or its links to the Great Barrington Declaration, which argued without evidence that restrictions cause more harm than the virus). There were also efforts to discredit the effectiveness of face masks to prevent COVID-19 or to refuse implementing mask mandates, invoking at times new political terms or slogans that gained popularity in recent years such as “health freedom” or “medical freedom” [4].
This is exemplified by a recent October 22 tweet from the Republican Governor Kristi Noem of South Dakota [5]:
If folks want to wear a mask, they are free to do so. Those who don’t want to wear a mask shouldn’t be shamed into it, and govt should not mandate it. We need to respect each other’s decisions. In SD, we know a little common courtesy can go a long way.

The open questioning of face masks or refusal to enforce mandates will likely continue to have tragic consequences for the American people. According to the IHME COVID-19 forecasting team, 95% public mask use would save almost 130,000 lives from September 22, 2020, through February 28, 2021 [1]. Thus, anti-science disinformation that advocates shunning masks could inflict a mass casualty event in the US. Its occurrence should not surprise us. Instead, our tragic loss of American lives would reflect the handiwork of an evolving anti-science movement that aggressively accelerated in the last 5 years, beginning in California and Texas. In this Essay, I argue that to understand how a nation state might seek to attack and dissolve modern biomedicine, it is helpful to revisit a tragic period in 20th century Russia (see Box 1). The relentless attacks on science and scientists during Stalin’s Great Purge and the ascendancy of Lysenkoism and other pseudoscientific theories provide a useful framework for addressing some stark reminders about the politicization of science occurring now in America, even if it plays out at a far lesser scale.

Box 1. Lessons from a dark chapter in history

One of the darkest chapters in the history of the Soviet Union, the Great Purge, or the Great Terror (Большой террор), saw the widespread imprisonment, execution, and persecution of millions considered enemies of Joseph Stalin’s government. It began following the 1934 assassination of Sergei Kirov, a Soviet leader and revolutionary, and halted in 1938, although significant elements of the purge remained throughout the 1940s.
The intelligentsia was a Great Purge target, as were entire fields of science, including astrophysics, which was ultimately deemed a “political platform” running counter to Marxism [6]. Another was the field of mendelian genetics, then led in the USSR by Nikolai Vavilov in his role as head of the Lenin All Union Academy of Agricultural Sciences, the scientific branch of the Commissariat of Agriculture. Vavilov was a botanist and a scientific pioneer in using genetic approaches to improve cereal crops for the USSR [6–8]. Ultimately, Vavilov came under attack by Trofim Lysenko, a peasant with no doctoral degree who popularized and laid claim to the concept of “vernalization” [6]. Lysenko and his colleagues proposed moistening and chilling winter wheat so that it would germinate in time for the spring, when it would supposedly flourish [6]. Through vernalization—which bore some resemblance to Lamarckian evolutionary theories by claiming that acquired traits could be inherited—Lysenko aspired to adapt wheat to the harsh Russian climate. As a sort of proof of concept, he had his father soak his winter wheat in water before burying it in a snowbank to keep it cold prior to spring planting [6].

Initially, Vavilov took on a mentoring role for Lysenko, even touting his accomplishments at the Sixth International Congress of Genetics held at Cornell University in Ithaca, New York, in the summer of 1932 [8]. See, for example, Vavilov’s praise of Lysenko in a special news “flash,” as it was called by R. C. Cook, the editor of the Journal of Heredity during the 1940s [8]:
The remarkable discovery recently made by T. D. Lysenko of Odessa opens enormous new possibilities to plant breeders and plant geneticists of mastering individual variation. . . . The essence of these methods, which are specific for different plants and different variety groups, consists in the action upon the seeds of definite combinations of darkness (photo-periodism), temperature and humidity. This discovery enables us to utilize in our climate for breeding and genetic work tropical and sub-tropical varieties. . . . This creates the possibility of widening the scope of breeding . . . to an unprecedented extent, allowing the crossing of varieties requiring entirely different periods of vegetation.

Lysenko’s vernalization technology would theoretically make it possible, argues Simon Ings in his book, Stalin and the Scientists, “to grow alligator pears and bananas in New York and lemons in New England” [6]. Its extraordinary claims aside, vernalization was seen as a form of Soviet homegrown science and a source of national pride. In contrast, Lysenko was able to convince Stalin that genetics was an evil science, much like relativity. Political expediency became the rationale for promoting pseudoscience even if it meant that millions of rural peasants would die of starvation in the USSR when Lysenko’s cold-resistant crops failed to materialize. Ultimately, Lysenko became the President of the Lenin Academy of Agricultural Science in 1939, whereas Vavilov was arrested in 1940 and rounded up with other intellectuals, including the founder of the Marx-Lenin Institute of World Literature. He was interrogated and sent to a Soviet prison in Saratov where he perished, possibly by starvation, in January 1943, despite repeated appeals from international leaders including the British Prime Minister, Winston Churchill (Fig 1) [6].

Fig 1. Photo of the prisoner Nikolai Vavilov. Official photo from the file of the investigation.
The People’s Commissariat for Internal Affairs (Народный комиссариат внутренних дел), Central Archive of the Federal Security Service of the Russian Federation (Moscow) (Центральный архив ФСБ РФ (Москва)), Institute of Plant Industry (Всероссийский институт растениеводства имени Н. И. Вавилова), created January 1, 1942. https://en.wikipedia.org/wiki/Nikolai_Vavilov#/media/File:Vavilov_in_prison.jpg

Vavilov received a posthumous pardon from Nikita Khrushchev during the 1950s, and in 2008 a book about his life, The Murder of Nikolai Vavilov: The Story of Stalin’s Persecution of One of the Great Scientists of the Twentieth Century, was published in English [9]. It remains a great irony that Vavilov devoted his scientific career to the humanitarian cause of feeding the population of the Soviet Union only to die by starvation.

Following the death of Stalin in 1953, the USSR began reopening to international science, ushering in a new era in vaccine development. Throughout the 1950s, both the US and Soviet Union suffered from severe polio epidemics, prompting the 2 nations to embark on an unprecedented scientific collaboration [10]. Dr. Albert Sabin sent his polio strains to the USSR, where they were manufactured at large scale to produce a trivalent vaccine. During the “Khrushchev Thaw,” it was tested in tens of millions of Soviet citizens and shown to be both safe and effective at preventing polio. A decade later, the US and USSR collaborated on vaccine improvements that led to the eradication of smallpox [10]. Nonetheless, state oppression of Soviet scientists continued, and Khrushchev supported Lysenko’s work. Moreover, the physicist and father of the Soviet hydrogen bomb, Andrei Sakharov, won the Nobel Peace Prize in 1975 for his advocacy of human rights, but was subsequently arrested and exiled to Gorky [11].
The mathematician and chess champion Natan Sharansky was arrested on treason charges in 1977 and kept in solitary confinement until his release through a prisoner exchange in 1986, after which he emigrated to Israel. The American physicist Robert Oppenheimer also endured persecution during the Red Scare of the 1950s, though on a lesser scale, having had his national security clearance revoked.
5.
The hippocampus has unique access to neuronal activity across all of the neocortex. Yet an unanswered question is how the transfer of information between these structures is gated. One hypothesis involves temporal-locking of activity in the neocortex with that in the hippocampus. New data from the Matthew E. Diamond laboratory shows that the rhythmic neuronal activity that accompanies vibrissa-based sensation, in rats, transiently locks to ongoing hippocampal θ-rhythmic activity during the sensory-gathering epoch of a discrimination task. This result complements past studies on the locking of sniffing and the θ-rhythm as well as the relation of sniffing and whisking. An overarching possibility is that the preBötzinger inspiration oscillator, which paces whisking, can selectively lock with the θ-rhythm to traffic sensorimotor information between the rat’s neocortex and hippocampus.The hippocampus lies along the margins of the cortical mantle and has unique access to neuronal activity across all of the neocortex. From a functional perspective, the hippocampus forms the apex of neuronal processing in mammals and is a key element in the short-term working memory, where neuronal signals persist for tens of seconds, that is independent of the frontal cortex (reviewed in [1,2]). Sensory information from multiple modalities is highly transformed as it passes from primary and higher-order sensory areas to the hippocampus. Several anatomically defined regions that lie within the temporal lobe take part in this transformation, all of which involve circuits with extensive recurrent feedback connections (reviewed in [3]) (Fig 1). This circuit motif is reminiscent of the pattern of connectivity within models of associative neuronal networks, whose dynamics lead to the clustering of neuronal inputs to form a reduced set of abstract representations [4] (reviewed in [5]). 
The first way station in the temporal lobe contains the postrhinal and perirhinal cortices, followed by the medial and lateral entorhinal cortices. Of note, olfactory input—which, unlike other senses, has no spatial component to its representation—has direct input to the lateral entorhinal cortex [6]. The third structure is the hippocampus, which contains multiple substructures (Fig 1).

Fig 1. Schematic view of the circuitry of the temporal lobe and its connections to other brain areas of relevance. Figure abstracted from published results [7–15]. Composite illustration by Julia Kuhl.

The specific nature of signal transformation and neuronal computations within the hippocampus is largely an open issue that defines the agenda of a great many laboratories. Equally vexing is the nature of signal transformation as the output leaves the hippocampus and propagates back to regions in the neocortex (Fig 1)—including the medial prefrontal cortex, a site of sensory integration and decision-making—in order to influence perception and motor action. The current experimental data suggest that only some signals within the sensory stream propagate into and out of the hippocampus. What regulates communication with the hippocampus or, more generally, with structures within the temporal lobe? The results from studies in rats and mice suggest that the most parsimonious hypothesis, at least for rodents, involves the rhythmic nature of neuronal activity at the so-called θ-rhythm [16], a 5–10 Hz oscillation (reviewed in [17]). The origin of the rhythm is not readily localized to a single locus [10], but certainly involves input from the medial septum [17] (a member of the forebrain cholinergic system) as well as from the supramammillary nucleus [10,18] (a member of the hypothalamus). The medial septum projects broadly to targets in the hippocampus and entorhinal cortex (Fig 1) [10].
Many motor actions, such as the orofacial actions of sniffing, whisking, and licking, occur within the frequency range of the θ-rhythm [19,20]. Thus, sensory input that is modulated by rhythmic self-motion can, in principle, phase-lock with hippocampal activity at the θ-rhythm to ensure the coherent trafficking of information between the relevant neocortical regions and temporal lobe structures [21–23].

We now shift to the nature of orofacial sensory inputs, specifically whisking and sniffing, which are believed to dominate the world view of rodents [19]. Recent work identified a premotor nucleus in the ventral medulla, named the vibrissa region of the intermediate reticular zone, whose oscillatory output is necessary and sufficient to drive rhythmic whisking [24]. While whisking can occur independently of breathing, sniffing and whisking are synchronized in the curious and aroused animal [24,25], as the preBötzinger complex in the medulla [26]—the oscillator for inspiration—paces whisking at nominally 5–10 Hz through collateral projections [27]. Thus, for the purposes of reviewing evidence for the locking of orofacial sensory inputs to the hippocampal θ-rhythm, we confine our analysis to aroused animals that function with effectively a single sniff/whisk oscillator [28].

What is the evidence for the locking of somatosensory signaling by the vibrissae to the hippocampal θ-rhythm? The first suggestion of phase locking between whisking and the θ-rhythm was based on a small sample size [29,30], which allowed for the possibility of spurious correlations. Phase locking was subsequently reexamined, using a relatively large dataset of 2 s whisking epochs across many animals, as animals whisked in air [31]. The authors concluded that while whisking and the θ-rhythm share the same spectral band, their phases drift incoherently.
Yet the possibility remained that phase locking could occur during special intervals, such as when a rat learns to discriminate an object with its vibrissae or when it performs a memory-based task. This set the stage for a further reexamination of this issue across different epochs in a rewarded task. Work from Diamond's laboratory that is published in the current issue of PLOS Biology addresses just this point in a well-crafted experiment that involves rats trained to perform a discrimination task. Grion, Akrami, Zuo, Stella, and Diamond [32] trained rats to discriminate between two different textures with their vibrissae. The animals were rewarded if they turned to a water port on the side that was paired with a particular texture. Concurrent with this task, the investigators also recorded the local field potential in the hippocampus (from which they extracted the θ-rhythm), the position of the vibrissae (from which they extracted the evolution of phase in the whisk cycle), and the spiking of units in the vibrissa primary sensory cortex. Their first new finding is a substantial increase in the amplitude of the hippocampal field potential at the θ-rhythm frequency—approximately 10 Hz for the data of Fig 2A—during the two approximately 0.5 s epochs when the animal approaches the texture and whisks against it. There is significant phase locking between whisking and the hippocampal θ-rhythm during both of these epochs (Fig 2B), as compared to a null distribution computed from epochs when the animal whisked in air outside the discrimination zone. Unfortunately, the coherence between whisking and the hippocampal θ-rhythm could not be ascertained during the decision (i.e., turn and reward) epochs.
Nonetheless, these data show that the coherence between whisking and the hippocampal θ-rhythm is closely aligned to epochs of active information gathering. Fig 2. Summary of findings on the θ-rhythm in a rat during a texture discrimination task, derived from reference [32].
(A) Spectrogram showing the change in spectral power of the local field potential in the hippocampal area CA1 before, during, and after a whisking-based discrimination task. (B) Summary index of the increase in coherence between the band-limited hippocampal θ-rhythm and whisking signals during approach of the rat to the stimulus and subsequent touch. The index reports |⟨e^{i(ɸH − ɸW)}⟩|, where ɸH and ɸW are the instantaneous phases of the hippocampal and whisking signals, respectively, and the averaging is over all trials and animals. (C) Summary indices of the increase in coherence between the band-limited hippocampal θ-rhythm and the spiking signal in the vibrissa primary sensory cortex (“barrel cortex”). The magnitude of the index for each neuron is plotted versus phase in the θ-rhythm. The arrows show the concentration of units around the mean phase—black arrows for the vector average across only neurons with significant phase locking (solid circles) and gray arrows for the vector average across all neurons (open and closed circles). The concurrent positions of the vibrissae are indicated. The vector average is statistically significant only for the approach (p < 0.0001) and touch (p = 0.04) epochs. The second finding by Grion, Akrami, Zuo, Stella, and Diamond [32] addresses the relationship between spiking activity in the vibrissa primary sensory cortex and the hippocampal θ-rhythm. The authors find that spiking is essentially independent of the θ-rhythm outside of the task (foraging in Fig 2C), similar to the result for whisking and the θ-rhythm (Fig 2B). They observe strong coherence between spiking and the θ-rhythm during the 0.5 s epoch when the animal approaches the textures (approach in Fig 2C), yet reduced (but still significant) coherence during the touch epoch (touch in Fig 2C).
The latter result is somewhat surprising, given past work from a number of laboratories that observe spiking in the primary sensory cortex and whisking to be weakly yet significantly phase-locked during exploratory whisking [33–37]. Perhaps overtraining leads to only a modest need for the transfer of sensory information to the hippocampus. Nonetheless, these data establish that phase locking of hippocampal and sensory cortical activity is essentially confined to the epoch of sensory gathering. Given the recent finding of a one-to-one locking of whisking and sniffing [24], we expect to find direct evidence for the phase locking of sniffing and the θ-rhythm. Early work indeed reported such phase locking [38] but, as in the case of whisking [29], this may have been a consequence of too small a sample and, thus, inadequate statistical power. However, Macrides, Eichenbaum, and Forbes [39] reexamined the relationship between sniffing and the hippocampal θ-rhythm before, during, and after animals sampled an odorant in a forced-choice task. They found evidence that the two rhythms phase-lock within approximately one second of the sampling epoch. We interpret this locking to be similar to that seen in the study by Diamond and colleagues (Fig 2B) [32]. All told, the combined data for sniffing and whisking by the aroused rodent, as accumulated across multiple laboratories, suggest that two oscillatory circuits—the supramammillary nucleus and medial septum complex that drives the hippocampal θ-rhythm and the preBötzinger complex that drives inspiration and paces the whisking oscillator during sniffing (Fig 1)—can phase-lock during epochs of gathering sensory information and likely sustain working memory. What anatomical pathway can lead to phase locking of these two oscillators?
The electrophysiological study of Tsanov, Chah, Reilly, and O’Mara [9] supports a pathway from the medial septum, which is driven by the supramammillary nucleus, to dorsal pontine nuclei in the brainstem. These pontine nuclei project to respiratory nuclei and, ultimately, the preBötzinger oscillator (Fig 1). This unidirectional pathway can, in principle, entrain breathing and whisking. Phase locking is not expected to occur during periods of basal breathing, when the breathing rate and θ-rhythm occur at highly incommensurate frequencies. However, it remains unclear why phase locking occurs only during a selected epoch of a discrimination task, whereas breathing and the θ-rhythm occupy the same frequency band during the epochs of approach, as well as touch-based target selection (Fig 2A). While a reafferent pathway provides the rat with information on self-motion of the vibrissae (Fig 1), it is currently unknown whether that information provides feedback for phase locking. A seeming requirement for effective communication between neocortical and hippocampal processing is that phase locking must be achieved at all possible phases of the θ-rhythm. Can multiple phase differences between sensory signals and the hippocampal θ-rhythm be accommodated? Two studies report that the θ-rhythm undergoes a systematic phase-shift along the dorsal–ventral axis of the hippocampus [40,41], although the full extent of this shift is only π radians [41]. In addition, past work shows that vibrissa input during whisking is represented among all phases of the sniff/whisk cycle, at levels from primary sensory neurons [42,43] through thalamus [44,45] and neocortex [33–37], with a bias toward retraction from the protracted position. A similar spread in phase occurs for olfactory input, as observed at the levels of the olfactory bulb [46] and cortex [47].
Thus, in principle, the hippocampus can receive, transform, and output sensory signals that arise over all possible phases in the sniff/whisk cycle. In this regard, two signals that are exactly out-of-phase by π radians can phase-lock as readily as signals that are in-phase. What are the constraints for phase locking to occur within the observed texture identification epochs? For a linear system, the time to lock between an external input and the hippocampal θ-rhythm depends on the observed spread in the spectrum of the θ-rhythm. This is estimated as Δf ~3 Hz (half-width at half-maximum amplitude), implying a locking time on the order of 1/Δf ~0.3 s. This is consistent with the approximately one second of enhanced θ-rhythm activity observed in the study by Diamond and colleagues (Fig 2A) [32] and in prior work [39,48] during a forced-choice task with rodents. Does the θ-rhythm also play a role in the gating of output from the hippocampus to areas of the neocortex? Siapas, Lubenov, and Wilson [48] provided evidence that the hippocampal θ-rhythm phase-locks to electrical activity in the medial prefrontal cortex, a site of sensory integration as well as decision-making. Subsequent work [49–51] showed that the hippocampus drives the prefrontal cortex, consistent with the known unidirectional connectivity between Cornu Ammonis area 1 (CA1) of the hippocampus and the prefrontal cortex [11] (Fig 1). Further, phase locking of hippocampal and prefrontal cortical activity is largely confined to the epoch of decision-making, as opposed to the epoch of sensory gathering. Thus, over the course of approximately one second, sensory information flows into and then out of the hippocampus, gated by phase coherence between rhythmic neocortical and hippocampal neuronal activity. It is of interest that the medial prefrontal cortex receives input signals from sensory areas in the neocortex [52] as well as a transformed version of these input signals via the hippocampus (Fig 1).
Yet it remains to be determined if this constitutes a viable hub for the comparison of the original and transformed signals. In particular, projections to the medial prefrontal cortex arise from the ventral hippocampus [2], while studies on the phase locking of hippocampal θ-rhythm to prefrontal neocortical activity were conducted in the dorsal hippocampus, where the θ-rhythm is strong compared to the ventral end [53]. Therefore, similar recordings need to be performed in the ventral hippocampus. An intriguing possibility is that the continuous phase-shift of the θ-rhythm along the dorsal-to-ventral axis of the hippocampus [40,41] provides a means to encode the arrival of novel inputs from multiple sensory modalities relative to a common clock. A final issue concerns the locking between sensory signals and hippocampal neuronal activity in species that do not exhibit a continuous θ-rhythm, with particular reference to bats [54–56] and primates [57–60]. One possibility is that only the up and down swings of neuronal activity about a mean are important, as opposed to the rhythm per se. In fact, for animals in which orofacial input plays a relatively minor role compared to rodents, such a scheme of clocked yet arrhythmic input may be a necessity. In this case, the window of processing is set by a stochastic interval between transitions, as opposed to the periodicity of the θ-rhythm. This may imply that up/down swings of neuronal activity drive hippocampal–neocortical communications in all species, with communication mediated via phase-locked oscillators in rodents and via synchronous fluctuations in bats and primates. The validity of this scheme and its potential consequence on neuronal computation remains an open issue and a focus of ongoing research.
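The coherence index in Fig 2B above is the mean resultant length of the phase difference ɸH − ɸW, averaged over samples. A minimal numerical sketch follows; the function name, sampling parameters, and synthetic phases are illustrative assumptions rather than anything from the paper (in practice, instantaneous phases would be extracted from recorded band-limited signals, e.g., via the Hilbert transform):

```python
import numpy as np

def phase_locking_index(phi_a, phi_b):
    """Mean resultant length |<exp(i*(phi_a - phi_b))>| of the phase
    difference between two signals: 1 for a fixed phase lag (perfect
    locking), near 0 for incoherently drifting phases."""
    dphi = np.asarray(phi_a) - np.asarray(phi_b)
    return np.abs(np.mean(np.exp(1j * dphi)))

# Illustrative phases sampled at 1 kHz for 2 s: a 10 Hz "hippocampal"
# rhythm, a whisking signal locked at a 45-degree lag, and one whose
# phase drifts as a random walk (all parameters invented).
t = np.linspace(0.0, 2.0, 2000, endpoint=False)
theta_phase = 2 * np.pi * 10 * t
locked_phase = theta_phase - np.pi / 4
rng = np.random.default_rng(0)
drift_phase = theta_phase + np.cumsum(rng.normal(0.0, 0.1, t.size))

print(phase_locking_index(theta_phase, locked_phase))  # close to 1
print(phase_locking_index(theta_phase, drift_phase))   # much lower
```

The same statistic, computed against phases recorded while the animal whisked in air outside the discrimination zone, would play the role of the null comparison described in the study.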
6.
Simon Robin Evans 《PLoS biology》2016,14(4)
It was recently proposed that long-term population studies be exempted from the expectation that authors publicly archive the primary data underlying published articles. Such studies are valuable to many areas of ecological and evolutionary biological research, and multiple risks to their viability were anticipated as a result of public data archiving (PDA), ultimately all stemming from independent reuse of archived data. However, empirical assessment was missing, making it difficult to determine whether such fears are realistic. I addressed this by surveying data packages from long-term population studies archived in the Dryad Digital Repository. I found no evidence that PDA results in reuse of data by independent parties, suggesting the purported costs of PDA for long-term population studies have been overstated. Data are the foundation of the scientific method, yet individual scientists are evaluated via novel analyses of data, generating a potential conflict of interest between a research field and its individual participants that is manifested in the debate over access to the primary data underpinning published studies [1–5]. This is a chronic issue but has become more acute with the growing expectation that researchers publish the primary data underlying research reports (i.e., public data archiving [PDA]). Studies show that articles publishing their primary data are more reliable and accrue more citations [6,7], but a recent opinion piece by Mills et al. [2] highlighted the particular concerns felt by some principal investigators (PIs) of long-term population studies regarding PDA, arguing that unique aspects of such studies render them unsuitable for PDA. The "potential costs to science" identified by Mills et al. [2] as arising from PDA are as follows:
- Publication of flawed research resulting from a "lack of understanding" by independent researchers conducting analyses of archived data
- Time demands placed on the PIs of long-term population studies arising from the need to correct such errors via, e.g., published rebuttals
- Reduced opportunities for researchers to obtain the skills needed for field-based data collection because equivalent long-term population studies will be rendered redundant
- Reduced number of collaborations
- Inefficiencies resulting from repeated assessment of a hypothesis using a single dataset
7.
This Formal Comment provides clarifications on the authors’ recent estimates of global bacterial diversity and the current status of the field, and responds to a Formal Comment from John Wiens regarding their prior work. We welcome Wiens’ efforts to estimate global animal-associated bacterial richness and thank him for highlighting points of confusion and potential caveats in our previous work on the topic [1]. We find Wiens’ ideas worthy of consideration, as most of them represent a step in the right direction, and we encourage lively scientific discourse for the advancement of knowledge. Time will ultimately reveal which estimates, and underlying assumptions, came closest to the true bacterial richness; we are excited and confident that this will happen in the near future thanks to rapidly increasing sequencing capabilities. Here, we provide some clarifications on our work, its relation to Wiens’ estimates, and the current status of the field. First, Wiens states that we excluded animal-associated bacterial species in our global estimates. However, thousands of animal-associated samples were included in our analysis, and this was clearly stated in our main text (second paragraph on page 3). Second, Wiens’ commentary focuses on “S1 Text” of our paper [1], which was rather peripheral and, hence, in the Supporting information. S1 Text [1] critically evaluated the rationale underlying previous estimates of global bacterial operational taxonomic unit (OTU) richness by Larsen and colleagues [2], but the results of S1 Text [1] did not in any way flow into the analyses presented in our main article. Indeed, our estimates of global bacterial (and archaeal) richness, discussed in our main article, are based on 7 alternative well-established estimation methods founded on concrete statistical models, each developed specifically for richness estimates from multiple survey data.
We applied these methods to >34,000 samples from >490 studies including, but not restricted to, animal microbiomes, to arrive at our global estimates, independently of the discussion in S1 Text [1]. Third, Wiens’ commentary can yield the impression that we proposed that there are only 40,100 animal-associated bacterial OTUs and that Cephalotes in particular only have 40 associated bacterial OTUs. However, these numbers, mentioned in our S1 Text [1], were not meant to be taken as proposed point estimates for animal-associated OTU richness, and we believe that this was clear from our text. Instead, these numbers were meant as examples to demonstrate how strongly the estimates of animal-associated bacterial richness by Larsen and colleagues [2] would decrease simply by (a) using better justified mathematical formulas, i.e., with the same input data as used by Larsen and colleagues [2] but founded on an actual statistical model; (b) accounting for even minor overlaps in the OTUs associated with different animal genera; and/or (c) using alternative animal diversity estimates published by others [3], rather than those proposed by Larsen and colleagues [2]. Specifically, regarding (b), Larsen and colleagues [2] (pages 233 and 259) performed pairwise host species comparisons within various insect genera (for example, within the Cephalotes) to estimate on average how many bacterial OTUs were unique to each host species, then multiplied that estimate by their estimated number of animal species to determine the global animal-associated bacterial richness. However, since their pairwise host species comparisons were restricted to congeneric species, their estimated number of unique OTUs per host species does not account for potential overlaps between different host genera. Indeed, even if an OTU is only found “in one” Cephalotes species, it might not be truly unique to that host species if it is also present in members of other host genera.
To clarify, we did not claim that all animal genera can share bacterial OTUs, but instead considered the implications of some average microbiome overlap (some animal genera might share no bacteria, and other genera might share a lot). The average microbiome overlap of 0.1% (when clustering bacterial 16S sequences into OTUs at 97% similarity) between animal genera used in our illustrative example in S1 Text [1] is of course speculative, but it is not unreasonable (see our next point). A zero overlap (implicitly assumed by Larsen and colleagues [2]) is almost certainly wrong. One goal of our S1 Text [1] was to point out the dramatic effects of such overlaps on animal-associated bacterial richness estimates using “basic” mathematical arguments. Fourth, Wiens’ commentary could yield the impression that existing data are able to tell us with sufficient certainty when a bacterial OTU is “unique” to a specific animal taxon. However, so far, the microbiomes of only a minuscule fraction of animal species have been surveyed. One can thus certainly not exclude the possibility that many bacterial OTUs currently thought to be “unique” to a certain animal taxon are eventually also found in other (potentially distantly related) animal taxa, for example, due to similar host diets and/or environmental conditions [4–7]. As a case in point, many bacteria in herbivorous fish guts were found to be closely related to bacteria in mammals [8], and Song and colleagues [6] report that bat microbiomes closely resemble those of birds. The gut microbiome of caterpillars consists mostly of dietary and environmental bacteria and is not species specific [4]. Even in animal taxa with characteristic microbiota, there is a documented overlap across host species and genera. For example, there are a small number of bacteria consistently and specifically associated with bees, but these are found across bee genera at the level of the 99.5% similar 16S rRNA OTUs [5].
To further illustrate that an average microbiome overlap between animal taxa at least as large as the one considered in our S1 Text (0.1%) [1] is not unreasonable, we analyzed 16S rRNA sequences from the Earth Microbiome Project [6,9] and measured the overlap of microbiota originating from individuals of different animal taxa. We found that, on average, 2 individuals from different host classes (e.g., 1 mammalian and 1 avian sample) share 1.26% of their OTUs (16S clustered at 100% similarity), and 2 individuals from different host genera belonging to the same class (e.g., 2 mammalian samples) share 2.84% of their OTUs (methods in S1 Text of this response). A coarser OTU threshold (e.g., 97% similarity, considered in our original paper [1]) would further increase these average overlaps. While less is known about insect microbiomes, there is currently little reason to expect a drastically different picture there, and, as explained in our S1 Text [1], even a small average microbiome overlap of 0.1% between host genera would strongly limit total bacterial richness estimates. The fact that the accumulation curve of detected bacterial OTUs over sampled insect species does not yet strongly level off says little about where the accumulation curve would asymptotically converge; rigorous statistical methods, such as the ones used for our global estimates [1], would be needed to estimate this asymptote. Lastly, we stress that while the present conversation (including previous estimates by Louca and colleagues [1], Larsen and colleagues [2], Locey and colleagues [10], Wiens’ commentary, and this response) focuses on 16S rRNA OTUs, it may well be that at finer phylogenetic resolutions, e.g., at bacterial strain level, host specificity and bacterial richness are substantially higher. In particular, future whole-genome sequencing surveys may well reveal the existence of far more genomic clusters and ecotypes than 16S-based OTUs.
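The pairwise overlap statistic quoted above (the percentage of OTUs two individuals' samples share) can be sketched directly from presence/absence data. A toy example; the OTU identifiers are invented, and the intersection-over-union convention is an assumption for illustration, not necessarily the authors' exact definition:

```python
from itertools import combinations

def pct_shared_otus(a, b):
    """Percent of OTUs shared between two samples, here taken as
    |intersection| / |union| * 100 (one possible convention)."""
    a, b = set(a), set(b)
    union = a | b
    return 100.0 * len(a & b) / len(union) if union else 0.0

def mean_pairwise_overlap(samples):
    """Average percent overlap across all unordered sample pairs."""
    pairs = list(combinations(samples, 2))
    return sum(pct_shared_otus(x, y) for x, y in pairs) / len(pairs)

# Invented presence sets of OTU identifiers for three host individuals
samples = [
    {"otu1", "otu2", "otu3", "otu4"},   # e.g., a mammalian sample
    {"otu3", "otu5", "otu6"},           # e.g., an avian sample
    {"otu2", "otu3", "otu7"},           # another individual
]
print(pct_shared_otus(samples[0], samples[1]))  # 1 of 6 OTUs, about 16.7
print(mean_pairwise_overlap(samples))
```

Averaging such pairwise overlaps within and across host groups is the kind of summary reported above (1.26% across host classes, 2.84% across genera within a class), though the authors' pipeline operates on clustered 16S sequences rather than toy identifier sets.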
8.
9.
An intricate network of innate and immune cells and their derived mediators function in unison to protect us from toxic elements and infectious microbial diseases that are encountered in our environment. This vast network operates efficiently by use of a single-cell epithelium in, for example, the gastrointestinal (GI) and upper respiratory (UR) tracts, fortified by adjoining cells and lymphoid tissues that protect its integrity. Perturbations certainly occur, sometimes resulting in inflammatory diseases or infections that can be debilitating and life threatening. For example, allergies in the eyes, skin, nose, and the UR or digestive tracts are common. Likewise, genetic background and environmental microbial encounters can lead to inflammatory bowel diseases (IBDs). This mucosal immune system (MIS) in both health and disease is currently under intense investigation worldwide by scientists with diverse expertise and interests. Despite this activity, numerous questions remain that will require detailed answers in order to use the MIS to our advantage. In this issue of PLOS Biology, a research article describes a multi-scale in vivo systems approach to determine precisely how the gut epithelium responds to an inflammatory cytokine, tumor necrosis factor-alpha (TNF-α), given by the intravenous route. This article reveals a previously unknown pathway in which several cell types and their secreted mediators work in unison to prevent epithelial cell death in the mouse small intestine. The results of this interesting study illustrate how in vivo systems biology approaches can be used to unravel the complex mechanisms used to protect the host from its environment. Higher mammals have evolved a unique mucosal immune system (MIS) in order to protect the vast surfaces bathed by external secretions (which may exceed 300 m^2 in humans) that are exposed to a rather harsh environment.
The first view of the MIS is a single-layer epithelium covered by mucus and antimicrobial products and fortified by both innate and adaptive components of host defense (Figure 1). To this, we can add a natural microbiota that lives in different niches, i.e., the distal small intestine and colon, the skin, the nasal and oral cavities, and the female reproductive tract. The largest microbial population can reach ∼10^12 bacteria/cm^3 and occurs in the human large intestine [1]–[3]. This large intestinal microbiota includes over 1,000 bacterial species, and the individual composition varies from person to person. Other epithelial sites harbor a separate type of microbiota, including the mouth, nose, skin, and other wet mucosal surfaces, that contributes to the host; in turn, the host benefits its microbial co-inhabitants. Gut bacteria grow by digesting complex carbohydrates, proteins, vitamins, and other components for absorption by the host, which in return rewards the microbiota by developing a natural immunity and tolerance (reviewed in [4]–[7]). Finally, the host microbiota influences the development and maturation of cells within lymphoid tissues of the MIS [8],[9]. Figure 1. The gut, nasal, upper respiratory and salivary, mammary, lacrimal, and other glands consist of a single-layered epithelium. Projections of villi in the GI tract consist mainly of columnar epithelial cells (ECs), with other types including goblet and Paneth cells. Goblet cells exhibit several functions, including secretion of mucins, which form a thick mucus covering. Paneth cells secrete chemokines, cytokines, and anti-microbial peptides (AMPs) termed α-defensins. Mucosal epithelial cells (ECs) are of central importance in host defense by providing both a physical barrier and innate immunity. For example, goblet cells secrete mucus, which forms a dense, protective covering for the entire epithelium (Figure 1).
Peristalsis initiated by the brush border of gastrointestinal (GI) tract ECs allows food contents to be continuously digested and absorbed as it passes through the gut. In the upper respiratory (UR) tract, ciliated ECs capture inhaled, potentially toxic particles, and their beating moves them upward to expel them, thereby protecting the lungs. Damaged, infected, or apoptotic ECs in the GI tract move to the tips of villi and are excreted; newly formed ECs arise in the crypt region and continuously migrate upward. Paneth cells in crypt regions of the GI tract produce anti-microbial peptides (AMPs), or α-defensins, while ECs produce β-defensins [10],[11] for host protection (Figure 1). Major resident cell components of the mucosal epithelium are the intraepithelial lymphocytes (IELs). The IELs consist of various T cell subsets that interact with ECs in order to maintain normal homeostasis [12]. Regulation is bi-directional, since ECs can also influence IEL T cell development and function [12]–[14]. The MIS, simply speaking, can be separated into inductive and effector sites based upon their anatomical and functional properties. The migration of immune cells from mucosal inductive to effector tissues via the lymphatic system is the cellular basis for the immune response in the GI, the UR, and female reproductive tracts (Figure 2). Mucosal inductive sites include the gut-associated lymphoid tissues (GALT) and nasopharyngeal-associated lymphoid tissues (NALT), as well as less well characterized lymphoid sites (Box 1). Collectively, these comprise a mucosa-associated lymphoid tissue (MALT) network for the provision of a continuous source of memory B and T cells that then move to mucosal effector sites [13],[14].
The MALT contains T cell regions, B cell–enriched areas harboring a high frequency of surface IgA-positive (sIgA+) B cells, and a subepithelial area with antigen-presenting cells (APCs), including dendritic cells (DCs), for the initiation of specific immune responses (Figure 2). The MALT is covered by a subset of differentiated microfold (M) cells and ECs, but not goblet cells, with underlying lymphoid cells that play central roles in the initiation of mucosal immune responses. M cells take up antigens (Ags) from the lumen of the intestinal and nasal mucosa and transport them to the underlying DCs (Figure 2). The DCs carry Ags into the inductive sites of the Peyer's patch or via draining lymphatics into the mesenteric lymph nodes (MLNs) for initiation of mucosal T and B cell responses (Figure 2). Retinoic acid (RA)-producing DCs enhance the expression of mucosal homing receptors (α4β7 and CCR9) on activated T cells for subsequent migration through the lymphatics, the bloodstream, and into the GI tract lamina propria [15],[16]. Regulation within the MIS is critical; several T cell subsets, including Th1, Th2, Th17, and Tregs, serve this purpose [13],[14],[17] (Figure 2). Figure 2. The mucosal immune system (MIS) is interconnected, enabling it to protect vast surface areas. This is accomplished by inductive sites of organized lymphoid tissues, e.g., in the gut, the Peyer's patches (PPs) and mesenteric lymph nodes (MLNs) comprise the GALT. Lumenal Ags can be easily sampled via M cells or by epithelial DCs since this surface is not covered by mucus due to an absence of goblet cells. Ingested Ags in DCs trigger specific T and B cell responses in Peyer's patches and MLNs. Homing of lymphocytes expressing specific receptors helps guide their eventual entry into major effector tissues, e.g., the lamina propria of the gut, the upper respiratory (UR) tract, the female reproductive tract, or acinar regions of exocrine glands.
Plasma cells then terminally differentiate to produce polymeric (mainly dimeric) IgA, which is transported across ECs via the pIgR for subsequent release as S-IgA Abs.
Box 1. Major Inductive Sites for Mucosal Immune Responses
- GALT (gut-associated lymphoid tissues)
- Peyer's patches (PPs)
- Mesenteric lymph nodes (MLNs)
- Isolated lymphoid follicles (ILFs)
- NALT (nasopharyngeal-associated lymphoid tissues)
- Tonsils/adenoids
- Inducible bronchus-associated lymphoid tissue (iBALT)
- Cervical lymph nodes (CLNs)
- Hilar lymph nodes (HLNs)
10.
11.
12.
Glenda E. Gray Fatima Laher Tanya Doherty Salim Abdool Karim Scott Hammer John Mascola Chris Beyrer Larry Corey 《PLoS biology》2016,14(3)
In the last 15 years, antiretroviral therapy (ART) has been the most globally impactful life-saving development of medical research. Antiretrovirals (ARVs) are used with great success for both the treatment and prevention of HIV infection. Despite these remarkable advances, this epidemic grows relentlessly worldwide. Over 2.1 million new infections occur each year, two-thirds in women and 240,000 in children. The widespread elimination of HIV will require the development of new, more potent prevention tools. Such efforts are imperative on a global scale. However, it must also be recognised that true containment of the epidemic requires the development and widespread implementation of a scientific advancement that has eluded us to date—a highly effective vaccine. Striving for such medical advances is what is required to achieve the end of AIDS. In the last 15 years, antiretroviral therapy (ART) has been the most globally impactful life-saving development of medical research. Antiretrovirals (ARVs) are used with great success for both the treatment and prevention of HIV infection. In the United States, the widespread implementation of combination ARVs led to the virtual eradication of mother-to-child transmission of HIV from 1,650 cases in 1991 to 110 cases in 2011, and a turnaround in AIDS deaths from an almost 100% five-year mortality rate to a five-year survival rate of 91% in HIV-infected adults [1]. Currently, the estimated average lifespan of an HIV-infected adult in the developed world is well over 40 years post-diagnosis. Survival rates in the developing world, although lower, are improving: in sub-Saharan Africa, AIDS deaths fell by 39% between 2005 and 2013, and the biggest decline, 51%, was seen in South Africa [2]. Furthermore, the association between ART, viremia, and transmission has led to the concept of “test and treat,” with the hope of reducing community viral load by testing early and initiating treatment as soon as a diagnosis of HIV is made [3].
Indeed, selected regions of the world have begun to actualize the public health value of ARVs, from gains in life expectancy to impact on onward transmission, with a potential 1% decline in new infections for every 10% increase in treatment coverage [2]. In September 2015, WHO released new guidelines removing all limitations on eligibility for ART among people living with HIV and recommending pre-exposure prophylaxis (PrEP) to population groups at significant HIV risk, paving the way for a global onslaught on HIV [4]. Despite these remarkable advances, this epidemic grows relentlessly worldwide. Over 2.1 million new infections occur each year, two-thirds in women and 240,000 in children [2]. In heavily affected countries, HIV infection rates have only stabilized at best: the annualized acquisition rates in persons in their first decade of sexual activity average 3%–5% yearly in southern Africa [5–7]. These figures are hardly compatible with the international health community’s stated goal of an “AIDS-free generation” [8,9]. In highly resourced settings, microepidemics of HIV still occur, particularly among gays, bisexuals, and men who have sex with men (MSM) [10]. HIV epidemics are expanding in two geographic regions in 2015—the Middle East/North Africa and Eastern Europe/Central Asia—largely due to challenges in implementing evidence-based HIV policies and programmes [2]. Even in the US, the almost 50,000 new cases recorded annually over the past decade, two-thirds among MSM, have remained stable and show no evidence of declining [1]. While treatment scale-up, medical male circumcision [11], and the implementation of strategies to prevent mother-to-child transmission [12] have received global traction, systemic or topical ARV-based biomedical advances to prevent sexual acquisition of HIV have, as yet, made limited impressions on a population basis, despite their reported efficacy.
Factors such as their adherence requirements, cost, potential for drug resistance, and long-term feasibility have restricted the appetite for implementation, even though these approaches may reduce HIV incidence in select populations.

Already, several trials have shown that daily oral administration of the ARV tenofovir disoproxil fumarate (TDF), taken singly or in combination with emtricitabine, as PrEP by HIV-uninfected individuals, reduces HIV acquisition among serodiscordant couples (where one partner is HIV-positive and the other is HIV-negative) [13], MSM [14], at-risk men and women [15], and people who inject drugs [16,17] by between 44% and 75%. Long-acting injectable antiretroviral agents such as rilpivirine and cabotegravir, administered every two and three months, respectively, are also being developed for PrEP. All of these PrEP approaches depend on repeated HIV testing and adherence to drug regimens, which may limit their effectiveness in some populations and contexts.

The widespread elimination of HIV will require the development of new, more potent prevention tools. Because HIV acquisition occurs subclinically, the elimination of HIV on a population basis will require a highly effective vaccine. Alternatively, if vaccine development is delayed, supplementary strategies may include long-acting pre-exposure antiretroviral cocktails and/or the administration of neutralizing antibodies through long-lasting parenteral preparations or the development of a “genetic immunization” delivery system, as well as scaling up delivery of highly effective regimens to eliminate mother-to-child HIV transmission (Fig 1).

Fig 1. Medical interventions required to end the epidemic of HIV. Image credit: Glenda Gray.
13.
Andrew Balmford Jonathan M. H. Green Michael Anderson James Beresford Charles Huang Robin Naidoo Matt Walpole Andrea Manica 《PLoS biology》2015,13(2)
How often do people visit the world’s protected areas (PAs)? Despite PAs covering one-eighth of the land and being a major focus of nature-based recreation and tourism, we don’t know. To address this, we compiled a globally representative database of visits to PAs and built region-specific models predicting visit rates from PA size, local population size, remoteness, natural attractiveness, and national income. Applying these models to all but the very smallest of the world’s terrestrial PAs suggests that together they receive roughly 8 billion (8 × 10⁹) visits/y—of which more than 80% are in Europe and North America. Linking our region-specific visit estimates to valuation studies indicates that these visits generate approximately US $600 billion/y in direct in-country expenditure and US $250 billion/y in consumer surplus. These figures dwarf current, typically inadequate spending on conserving PAs. Thus, even without considering the many other ecosystem services that PAs provide to people, our findings underscore calls for greatly increased investment in their conservation.

Enjoyment of nature, much of it in protected areas (PAs), is recognised as the most prominent cultural ecosystem service [1–3], yet we still lack even a rough understanding of its global magnitude and economic significance. Large-scale assessments have been restricted to regional or biome-specific investigations [4–8] (but see [9]). There are good reasons for this. Information on visit rates is limited, widely scattered, and confounded by variation in methods [10,11]. Likewise, estimates of the value of visits vary greatly—geographically, among methods, and depending on the component of value being measured [12–14]. Until now, these problems have prevented data-driven analysis of the worldwide scale of nature-based recreation and tourism.
But with almost all the world’s governments committed (through the Aichi Biodiversity Targets [15]) to integrating biodiversity into national accounts, policymakers require such gaps in our knowledge of natural capital to be filled.

We tackled this shortfall in our understanding of a major ecosystem service by focusing on terrestrial PAs, which cover one-eighth of the land [16] and are a major focus of nature-based recreation and tourism. We compiled data on visit rates to over 500 PAs and built region-specific models, which predicted variation in visitation in relation to the properties of PAs and to local socioeconomic conditions. Next, we used these models to estimate visit rates to all but the smallest of the world’s terrestrial PAs. Last, by summing these estimates by region and combining the totals with region-specific medians for the value of nature visits obtained from the literature, we derived approximate estimates of the global extent and economic significance of PA visitation.

Given the scarcity of data on visits to PAs, our approach was to use all available information (although we excluded marine and Antarctic sites, and International Union for Conservation of Nature (IUCN) Category I PAs, where tourism is typically discouraged; for further details of data collection and analysis see Materials and Methods). This generated a database of visitor records for 556 PAs spread across 51 countries and included 2,663 records of annual visit numbers over our best-sampled ten-year period (1998–2007) (S1 Table).
Mean annual visit rates for individual PAs in this sample ranged from zero to over 10 million visits/y, with a median across all sampled PAs of 20,333 visits/y.

We explored this variation by modelling it in relation to a series of biophysical and socioeconomic variables that might plausibly predict visit rates (after refs [6,7,17]): PA size, local population size, PA remoteness, a simple measure of the attractiveness of the PA’s natural features, and national income (see Materials and Methods for a priori predictions). For each of five major regions, we performed univariate regressions (S2 Table) and then built generalised linear models (GLMs) in an effort to predict variation in observed visit rates. While the GLMs had modest explanatory power within regions (S3 Table), together they accounted for 52.9% of observed global variation in visit rates. Associations with individual GLM variables—controlling for the effects of other variables—differed regionally in their strength but broadly matched our predictions (S1 Fig.). Visit rates increased with local population size (in Europe), decreased with remoteness (everywhere apart from Asia/Australasia), increased with natural attractiveness (in North and Latin America), and increased with national income (everywhere else). Controlling for these variables, visit rates were highest in North America, lower in Asia/Australasia and Europe, and lowest in Africa and Latin America.

To quantify how often people visit PAs as a whole, we used our region-specific GLMs to estimate visit rates to 94,238 sites listed in the World Database on Protected Areas (WDPA) [18]. We again excluded marine, Antarctic, and Category I PAs, as well as almost 40,000 extremely small sites which were below the size (10 ha) of the smallest PA in our sample (S2 Fig.).
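The modelling step above can be sketched in miniature. The example below is a hedged stand-in, not the paper’s actual specification: it simulates a sample of protected areas, fits a log-linear visit-rate model by ordinary least squares on the log scale (in place of the region-specific GLMs), and predicts a visit rate for a hypothetical park. All predictors, coefficients, and sample sizes are invented for illustration.

```python
import math
import random

random.seed(1)

# Simulate a sample of PAs: predictors are [1, log area, log local population,
# remoteness index]; the response is log annual visits. Coefficients are invented.
true_beta = [2.0, 0.1, 0.6, -0.8]
sample = []
for _ in range(200):
    x = [1.0, random.uniform(2, 10), random.uniform(5, 14), random.uniform(0, 1)]
    y = sum(b * xi for b, xi in zip(true_beta, x)) + random.gauss(0, 0.3)
    sample.append((x, y))

# Ordinary least squares on the log scale via the normal equations (X'X)b = X'y,
# solved by Gauss-Jordan elimination; a stand-in for the paper's GLM fit.
k = len(true_beta)
XtX = [[sum(x[i] * x[j] for x, _ in sample) for j in range(k)] for i in range(k)]
Xty = [sum(x[i] * y for x, y in sample) for i in range(k)]

def solve(a, b):
    m = [row[:] + [bi] for row, bi in zip(a, b)]
    n = len(m)
    for c in range(n):
        pivot = max(range(c, n), key=lambda r: abs(m[r][c]))
        m[c], m[pivot] = m[pivot], m[c]
        for r in range(n):
            if r != c:
                f = m[r][c] / m[c][c]
                m[r] = [v - f * w for v, w in zip(m[r], m[c])]
    return [m[i][n] / m[i][i] for i in range(n)]

beta = solve(XtX, Xty)
print("fitted coefficients:", [round(b, 2) for b in beta])

# Predict annual visits for a hypothetical accessible park near a large population.
x_new = [1.0, 6.0, 12.0, 0.2]
print("predicted visits/y:", round(math.exp(sum(b * xi for b, xi in zip(beta, x_new)))))
```

In the paper the response is modelled per region with GLMs and further predictors (natural attractiveness, national income); the least-squares fit above only shows the shape of the exercise.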
The limited power of our GLMs and significant errors in the WDPA mean our estimates of visit rates should be treated with caution for individual sites or (when aggregated to national level) for smaller countries. However, the larger-scale patterns they reveal are marked. Estimated median visit rates per PA (averaged within countries) are lowest in Africa (at around 3,000/y) and Latin America (4,000/y), and greatest in North America (350,000/y) (S3 Table). When visit rates are aggregated across all PAs within a country, pronounced regional differences in the numbers of PAs (with relatively few in Africa and Latin America) magnify these patterns and indicate that while many African countries have <100,000 PA visits/y, PAs in the United States receive a combined total of over 3 billion visits/y (Fig. 1). This variation is underscored when aggregate PA visit rates are standardised by the annual number of non-workdays and total population size of each region: across Europe we reckon there are ~5 PA visits/100 non-work person-days; for North America, the figure is ~10 visits/100 non-work person-days; for each other region, our estimates are <0.3 visits/100 non-work person-days.

Fig 1. Estimated total PA visit rates for each country. Totals (which are log10-transformed) were derived by applying the relevant regional GLM (S3 Table) to all of a country’s terrestrial PAs (excluding those <10 ha, and marine and IUCN Category I PAs) listed in the WDPA [18]. Asterisks show countries for which we had visit rate observations.

Summing our aggregate estimates of PA visits suggests that between them, the world’s terrestrial PAs receive approximately 8 billion visits/y. Of these, we estimate 3.8 billion visits/y are in Europe (where more than half of the PAs in the WDPA are located) and 3.3 billion visits/y are in North America (S3 Table). These numbers are strikingly large.
However, given our confidence intervals (95% CIs for the global total: 5.4–18.5 billion/y) and considering several conservative aspects of our calculations (e.g., the exclusion of ~40,000 very small sites and the incomplete nature of the WDPA), we consider it implausible that there are fewer than 5 billion PA visits worldwide each year. Three national estimates support this view: 2.5 billion visit-days/y to US PAs in 1996 [4], >1 billion visits/y (albeit many of them cultural rather than nature-based) to China’s National Parks in 2006 [19], and 3.2–3.9 billion visits/y to all British “ecosystems” (most of which are not in PAs) in 2010 [7].

Finally, what can be inferred about the economic significance of visits on this scale? Economists working on tourism distinguish two main, non-overlapping components of value [12]: direct expenditure by visitors (an element of economic impact, calculated from spending on fees, travel, accommodation, etc.), and consumer surplus (a measure of economic value which arises because many visitors would be prepared to pay more for their visit than they actually have to; it is defined as the difference between what visitors would be prepared to pay for a visit and what they actually spend, and is typically quantified using travel cost or contingent valuation methods). We conducted an extensive literature search to derive median (but conservative) figures for each type of value for each region (S4 Table). Applying these to our corresponding estimates of visit rates and summing across regions yields an estimate of global gross direct expenditure associated with PA visits (within-country only, and excluding indirect and induced expenditure) of ~US $600 billion/y worldwide (at 2014 prices). The corresponding figure for global consumer surplus is ~US $250 billion/y.

Such numbers are unavoidably imprecise.
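The final aggregation is simple arithmetic: regional visit totals multiplied by regional per-visit value medians, then summed across regions. In the sketch below the visit totals follow the estimates quoted in the text, but the per-visit medians are made-up placeholders, not the values from the paper’s S4 Table.

```python
# Regional visit totals follow the estimates quoted above; the per-visit value
# medians are HYPOTHETICAL placeholders, not the medians from S4 Table.
visits_per_year = {
    "Europe": 3.8e9,
    "North America": 3.3e9,
    "Rest of world": 0.9e9,   # remainder of the ~8 billion global total
}
expenditure_per_visit = {"Europe": 70, "North America": 80, "Rest of world": 60}  # US$
surplus_per_visit = {"Europe": 30, "North America": 35, "Rest of world": 25}      # US$

total_expenditure = sum(v * expenditure_per_visit[r] for r, v in visits_per_year.items())
total_surplus = sum(v * surplus_per_visit[r] for r, v in visits_per_year.items())
print(f"direct expenditure: ~US ${total_expenditure / 1e9:.0f} billion/y")
print(f"consumer surplus:   ~US ${total_surplus / 1e9:.0f} billion/y")
```

With these placeholder medians the totals land in the same few-hundred-billion range as the paper’s estimates, which is the point: the order of magnitude is driven by the sheer number of visits.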
Uncertainty in our modelled visit rates and the wide variation in published estimates of expenditure and consumer surplus mean that they could be out by a factor of two or more. However, comparison with calculations that visits to North American PAs alone have an economic impact of $350–550 billion/y [4] and that direct expenditure on all travel and tourism worldwide runs at $2,000 billion/y [20] suggests our figures are of the correct order of magnitude, and that the value of PA visitation runs into hundreds of billions of dollars annually.

These results quantify, we believe for the first time, the scale of visits to the world’s PAs and their approximate economic significance. We currently spend <$10 billion/y on safeguarding PAs [21]—a figure widely regarded as grossly insufficient [21–25]. Even without considering the many other benefits which PAs provide [22], our estimates of the economic impact and value of PA visitation dwarf current expenditure—highlighting the risks of underinvestment in conservation, and suggesting that increased investment in protected area maintenance and expansion would yield substantial returns.
14.
Nicole M. Gerardo 《PLoS biology》2015,13(2)
Many organisms harbor microbial associates that have profound impacts on host traits. The phenotypic effects of symbionts on their hosts may include changes in development, reproduction, longevity, and defense against natural enemies. Determining the consequences of associating with a microbial symbiont requires experimental comparison of hosts with and without symbionts. Determining the mechanism by which symbionts alter these phenotypes can then involve genomic, genetic, and evolutionary approaches; however, many host-associated symbionts are not amenable to genetic approaches that require cultivation of the microbe outside the host. In the current issue of PLOS Biology, Chrostek and Teixeira highlight an elegant approach to studying functional mechanisms of symbiont-conferred traits. They used directed experimental evolution to select for strains of Wolbachia wMelPop (a bacterial symbiont of fruit flies) that differed in copy number of a region of the genome suspected to underlie virulence. Copy number evolved rapidly when under selection, and wMelPop strains with more copies of the region shortened the lives of their Drosophila hosts more than symbionts with fewer copies. Interestingly, the wMelPop strains with more copies also increased host resistance to viruses compared to symbionts with fewer copies. Their study highlights the power of exploiting alternative approaches when elucidating the functional impacts of symbiotic associations.

Symbioses, long-term and physically close interactions between two or more species, are central to the ecology and evolution of many organisms. Though “symbiosis” is more often used to define interactions that are presumed to be mutually beneficial to a host and its microbial partner, a broader definition including both parasitic and mutualistic interactions recognizes that the fitness effects of many symbioses are complex and often context dependent.
Whether an association is beneficial can depend on ecological conditions, and mutation and other evolutionary processes can result in symbiont strains that differ in terms of costs and benefits to hosts (Fig. 1).

Fig 1. The symbiosis spectrum. The costs and benefits of symbiosis for hosts are not bimodal but span a continuum. The benefit-to-cost ratio is mediated both by environmental conditions and by the strain of symbiont. For example, the bacterium Hamiltonella defensa increases aphid resistance to parasitoid wasps; when Hamiltonella loses an associated bacteriophage, protection is lost. Also in aphids, Buchnera aphidicola is a bacterial symbiont that provisions its hosts with critical nutritional resources, but alterations of the heat shock promoter in Buchnera lessen the fitness benefit of symbiosis for the hosts under elevated temperatures. Amplification of a region of the Wolbachia genome known as Octomom causes the bacteria to shorten the lifespan of their Drosophila fly hosts.

Elucidating the effects of host-associated microbes includes, when possible, experiments designed to assay host phenotypes when they do and do not have a particular symbiont of interest (Fig. 2). In systems in which hosts acquire symbionts from the environment, hosts can be reared in sterile conditions to prevent acquisition [1]. If symbionts are passed internally from mother to offspring, antibiotic treatments can sometimes be utilized to obtain lineages of hosts without symbionts [2]. The impacts of symbiont presence on survival, development, reproduction, and defense can be quantified, with the caveat that these impacts may be quite different under alternative environmental conditions.
While such experiments are sometimes more tractable in systems with simple microbial consortia, the same experimental processes can be utilized in systems with more complex microbial communities [3,4].

Fig 2. Approaches to functionally characterize symbiont effects. The first step in functionally characterizing the phenotypic impacts of a symbiont on its host is to measure phenotypes of hosts with and without symbionts. Any effects need to be considered in the light of how they are modified by environmental conditions. Understanding the mechanisms underlying symbiont alteration of host phenotype can involve, and often combines, genomic, genetic, and evolutionary approaches. Solid arrows indicate the path leading to results highlighted in Chrostek and Teixeira’s investigation of Wolbachia virulence in this issue of PLoS Biology.

Once a fitness effect of symbiosis is ascertained, determining the mechanistic basis of this effect can be challenging. A genomics approach sometimes provides informative insight into microbial function. Sequencing of many insect-associated symbionts, for example, has confirmed the presence of genes necessary for amino acid and vitamin synthesis [5–8]. These genomic revelations, in some cases, can be linked to phenotypic effects of symbiosis for the hosts. For example, aphids reared in the absence of their obligate symbiotic bacteria, Buchnera aphidicola, can survive when provisioned with supplemental amino acids but cannot survive without supplementation, suggesting that Buchnera’s provisioning of amino acids is critical for host survival [9,10]. The Buchnera genome contains many of the genes necessary for amino acid synthesis [5].

Linking genotype to phenotype, however, can be complicated. Experiments are necessary to functionally test the insights garnered from genome sequencing.
For example, just because a symbiont has genes necessary for synthesis of a particular nutrient does not mean that the nutrient is being provisioned to its host. Furthermore, in many systems we do not know what genetic mechanisms are most likely to influence a symbiont-conferred phenotype. For example, if hosts associated with a given microbe have lower fitness than those without it, what mechanism mediates the difference? Is the microbe producing a toxin? Is it using too many host resources? In these cases, a single genome provides even less insight.

Comparative genomics can be another approach. This requires collecting hosts with alternative symbiont strains and then testing these strains in a common host background to demonstrate that they have different phenotypic effects. Symbiont genomes can then be sequenced and compared to identify differences. This approach was utilized to compare genomes of strains of the aphid bacterial symbiont Regiella insecticola that confer different levels of resistance to parasitoid wasps [11]; the protective and nonprotective Regiella genomes differed in many respects. Comparing the genomes of Wolbachia strains with differential impacts on fly host fitness [12,13] revealed fewer differences, though none involved a gene with a function known to impact host fitness. Comparative genomics rarely uncovers a holy grail, as the genomes of symbiont strains with alternative phenotypic effects rarely differ at a single locus of known function.

Another approach, which is at the heart of studies of microbial pathogens, is to use genetic tools to manipulate symbionts at candidate loci (or randomly through mutagenesis) and compare the phenotypic effects of genetically manipulated and unmanipulated symbionts. Indeed, this approach has provided insights into genes underlying traits of both pathogenic [14] and beneficial [15,16] microbes. There is one challenge.
Many host-associated symbionts are not cultivable outside of their hosts, which precludes utilization of most traditional genetic techniques used to modify microbial genomes.

An alternative approach to studying symbiont function leverages evolution. Occasionally, lineages that once conferred some phenotypic effect, when tested later, no longer do. If symbiont samples were saved along the way, researchers can then determine what changed in the genome. For example, pea aphids (Acyrthosiphon pisum) harboring the bacteria Hamiltonella defensa are more resistant to parasitoid wasps than those without the bacteria [17,18]. Toxin-encoding genes identified in the genome of a Hamiltonella-associated bacteriophage were hypothesized to be central to this defense [18,19]. However, confirmation of the bacteriophage’s role required comparing the insects’ resistance to wasps when they harbored the same Hamiltonella with and without the phage. No Hamiltonella isolates were found in nature without the phage, but bottleneck passaging of the insects and symbionts generation after generation in the laboratory led to the loss of phage in multiple host lineages. Experimental assays confirmed that in the absence of phage, there was no protection [20]. Similarly, laboratory passaging of aphids and symbionts serendipitously led to the spread of a mutation in the genome of Buchnera aphidicola, the primary, amino acid-synthesizing symbiont of pea aphids. The mutation, a single-nucleotide deletion in the promoter for ibpA, a gene encoding a heat-shock protein, lowers aphid fitness under elevated temperature conditions [21]. The mutation is found at low levels in natural aphid populations, suggesting that laboratory conditions facilitate maintenance of the genotype.

In the above cases, evolution was a fortunate coincidence. In this issue of PLOS Biology, Chrostek and Teixeira (2014) illustrate another alternative: directed experimental evolution.
Previous work demonstrated that a strain of the symbiotic bacterium Wolbachia, wMelPop, is virulent to its Drosophila melanogaster hosts, considerably shortening lifespan while overproliferating inside the flies [22]. To investigate the mechanism of virulence, researchers compared the genomic content of an avirulent Wolbachia strain to that of the virulent wMelPop [12,13]. These comparisons revealed that the wMelPop genome contains a region with eight genes that is amplified multiple times; avirulent strains carry only a single copy. This eight-gene region was nicknamed “Octomom.” To functionally test whether Octomom mediates Wolbachia virulence, over successive generations, Chrostek and Teixeira selected females with either high or low Octomom copy numbers to start the next generation. They found that copy number could evolve rapidly and was correlated with virulence. Flies harboring wMelPop with more copies of Octomom had shorter lifespans. This cost was reversed in the presence of natural enemies: flies harboring wMelPop with more copies of Octomom had higher resistance to viral pathogens. Thus, selection provided a functional link between genotype and phenotype in a symbiont recalcitrant to traditional microbial genetics approaches.

In many respects, this is similar to the research on aphids and their symbionts, where protective phenotypes were lost through passaging of aphids and symbionts generation after generation as part of standard laboratory maintenance. Chrostek and Teixeira simply used the tools of experimental evolution to select for altered symbionts in a controlled fashion. Comparison of the studies also highlights two potential approaches: select for a phenotype and determine the genotypic change, or select for a genotype of interest and determine the phenotypic effect.

Why do we need to know the genetic mechanisms underlying symbiont-conferred traits?
In terms of evolutionary dynamics, the maintenance of a symbiont’s effect in a population is predicated on the likelihood of it being maintained in the presence of mutation, drift, and selection. Symbiosis research often considers how ecological conditions influence symbiont-conferred traits but less often considers the instability of those influences due to evolutionary change. From an applied perspective, symbiont alteration of insect phenotypes is a potential mechanism to reduce vectoring of human and agricultural pathogens, either by directly reducing insect fitness or by reducing the capacity of vectors to serve as pathogen reservoirs [23–28]. Short-term field trials, for example, have demonstrated spread and persistence of Wolbachia in mosquito populations [29,30]. Because Wolbachia reduce persistence of viruses, including human pathogens, in insects [26,31–33], this is a promising pesticide-free and drug-free control strategy for insect-vectored diseases. Can we assume that Wolbachia and other symbionts will always confer the same phenotypes to their hosts? If the conferred phenotype is based on a region of the genome where mutation is likely (e.g., the homopolymeric tract within the heat shock promoter of aphid Buchnera, the Octomom region in Drosophila wMelPop), then we have clear reason to suspect that the genotypic and phenotypic makeup of the symbiont population could change over time. We need to investigate how populations of bacterial symbionts evolve in host populations under natural ecological conditions, carefully screening for both changes in phenotype and changes in genotype over the course of such experimental observations. We then need to incorporate evolutionary changes when modeling symbiont maintenance and when considering applied uses of symbionts.
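As a hedged illustration of the kind of evolutionary bookkeeping this calls for, the sketch below runs a minimal Wright-Fisher-style simulation of a symbiont variant (say, a high-copy Octomom genotype) whose host fitness effect flips sign with the environment. The population size, selection coefficients, and starting frequency are all hypothetical, not drawn from the studies discussed.

```python
import random

random.seed(42)

def simulate(p0, s, n_hosts=1000, generations=200):
    """Return the frequency of a symbiont variant after selection and drift."""
    p = p0
    for _ in range(generations):
        # deterministic selection on the variant's relative fitness 1 + s ...
        w = 1 + s
        p = p * w / (p * w + (1 - p))
        # ... then binomial drift in a finite population of symbiont-bearing hosts
        p = sum(random.random() < p for _ in range(n_hosts)) / n_hosts
        if p in (0.0, 1.0):
            break
    return p

# Costly in the absence of natural enemies: the variant is usually lost.
p_no_enemies = simulate(0.5, -0.05)
# Protective when viral pathogens are common: the variant usually spreads.
p_enemies = simulate(0.5, +0.05)
print("no enemies:", p_no_enemies, "| enemies present:", p_enemies)
```

Even this toy model makes the qualitative point of the paragraph: whether a symbiont-conferred phenotype persists in a population depends on the ecological context in which selection acts, not just on whether the phenotype exists today.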
15.
16.
Eduardo P. C. Rocha 《PLoS biology》2016,14(3)
The diversification of prokaryotes is accelerated by their ability to acquire DNA from other genomes. However, the underlying processes also facilitate genome infection by costly mobile genetic elements. The discovery that cells can take up DNA by natural transformation was instrumental to the birth of molecular biology nearly a century ago. Surprisingly, a new study shows that this mechanism could efficiently cure the genome of mobile elements acquired through previous sexual exchanges.

Horizontal gene transfer (HGT) is a key contributor to the genetic diversification of prokaryotes [1]. Its frequency in natural populations is very high, leading to species’ gene repertoires with relatively few ubiquitous (core) genes and many low-frequency genes (present in a small proportion of individuals). The latter are responsible for much of the phenotypic diversity observed in prokaryotic species and are often encoded in mobile genetic elements that spread between individual genomes as costly molecular parasites. Hence, HGT of interesting traits often arrives in expensive vehicles.

The net fitness gain of horizontal gene transfer depends on the genetic background of the new host, the acquired traits, the fitness cost of the mobile element, and the ecological context [2]. A study published in this issue of PLOS Biology [3] proposes that a mechanism originally thought to favor the acquisition of novel DNA—natural transformation—might actually allow prokaryotes to clean their genomes of mobile genetic elements.

Natural transformation allows the uptake of environmental DNA into the cell (Fig 1). It differs markedly from the other major mechanisms of HGT by depending exclusively on the recipient cell, which controls the expression of the transformation machinery and favors exchanges with closely related taxa [4]. DNA arrives in the cytoplasm in the form of small single-stranded fragments.
If it is not degraded, it may integrate into the genome by homologous recombination at regions of high sequence similarity (Fig 1). This results in allelic exchange between a fraction of the chromosome and the foreign DNA. Depending on the recombination mechanisms operating in the cell and on the extent of sequence similarity between the transforming DNA and the genome, alternative recombination processes may take place. Nonhomologous DNA flanked by regions of high similarity can be integrated by double homologous recombination at the edges (Fig 1E). Mechanisms mixing homologous and illegitimate recombination require less strict sequence similarity and may also integrate nonhomologous DNA into the genome [5]. Some of these processes lead to small deletions of chromosomal DNA [6]. These alternative recombination pathways allow the bacterium to lose and/or acquire novel genetic information.

Fig 1. Natural transformation and its outcomes. The mechanism of environmental DNA uptake brings small single-stranded DNA fragments into the cytoplasm (A). Earlier models for the raison d’être of natural transformation have focused on the role of DNA as a nutrient (B), as a breaker of genetic linkage (C), or as a substrate for DNA repair (D). The chromosomal curing model allows the removal of mobile elements by recombination between conserved sequences at their extremities (E). The model is strongly affected by the size of the incoming DNA fragments, since the probability of uptake of a mobile element rapidly decreases with the size of the element and of the incoming fragments (F). This leads to a bias towards the deletion of mobile elements by recombination, especially the largest ones.
In spite of this asymmetry, some mobile elements can integrate into the genome via natural transformation, following homologous recombination between large regions of high sequence similarity (G) or homology-facilitated illegitimate recombination in short regions of sequence similarity (H).

Natural transformation was the first described mechanism of HGT. Its discovery, in the first half of the 20th century, was instrumental in demonstrating that DNA is the support of genetic information. The mechanism is also regularly used to genetically engineer bacteria. Researchers have thus been tantalized by the lack of any consensus regarding the raison d’être of natural transformation.

Croucher, Fraser, and colleagues propose that the small size of recombining DNA fragments arising from transformation biases the outcome of recombination towards the deletion of chromosomal genetic material (Fig 1F). Incoming DNA carrying the core genes that flank a mobile element, but missing the element itself, can provide small DNA fragments that become templates to delete the element from the recipient genome (Fig 1E). The inverse scenario, incoming DNA carrying the core genes and a mobile element absent from the genome, is unlikely because the mobile element is large and the recombining transformation fragments are small. Importantly, this mechanism most efficiently removes loci at low frequency in the population, because incoming DNA is more likely to lack such intervening sequences when they are rare. Invading mobile genetic elements are initially at low frequencies in populations and will be frequently deleted by this mechanism. Hence, recombination will be strongly biased towards the deletion or inactivation of large mobile elements such as phages, integrative conjugative elements, and pathogenicity islands.
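The size asymmetry behind this deletion bias can be made concrete with a toy calculation. Assume a recombining fragment of length L needs h bp of flanking homology on each side: deleting an element only requires the fragment to span the empty junction in the donor DNA, whereas acquiring an element of size s requires the fragment to carry the whole element plus both flanks. All numbers below (L, h, s) are illustrative assumptions, not parameters from the study.

```python
# Toy model of the deletion bias: count fragment placements compatible with
# loss vs. gain of a mobile element, given flanking-homology requirements.
L = 2000   # transforming fragment length (bp), illustrative
h = 200    # homology needed on each flank (bp), illustrative

def favourable_positions(span):
    """Number of fragment placements covering a region of length `span`
    with at least h bp of homology on each side."""
    return max(0, L - span - 2 * h)

loss = favourable_positions(0)        # donor junction carries no element
for s in (500, 1000, 1500, 5000):     # candidate element sizes (bp)
    gain = favourable_positions(s)
    print(f"element {s:>5} bp: gain/loss ratio = {gain / loss:.2f}")
```

The ratio falls quickly as the element grows and hits zero once the element plus flanks exceeds the fragment length, matching the text’s point that large elements (phages, integrative conjugative elements, pathogenicity islands) are far more likely to be deleted than acquired by transformation.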
Simulations at a population scale show that transformation could even counteract the horizontal spread of mobile elements.

An obvious limit of natural transformation is that it can't cope with mobile genetic elements that rapidly take control of the cell, such as virulent phages, or that remain extra-chromosomal, such as plasmids. Another limit of transformation is that it facilitates the acquisition of costly mobile genetic elements [7,8], especially if these are small. When these elements replicate in the genome, as is the case for transposable elements, they may become difficult to remove by subsequent events of transformation. Further work will be needed to quantify the costs associated with such infections.

Low-frequency adaptive genes might be deleted through transformation in the way proposed for mobile genetic elements. However, adaptive genes rise rapidly to high frequency in populations, becoming too frequent to be affected by transformation. Interestingly, genetic control of transformation might favor the removal of mobile elements incurring fitness costs while preserving those carrying adaptive traits [3]. Transformation could thus effectively cure chromosomes and other replicons of deleterious mobile genetic elements integrated in previous events of horizontal gene transfer, while preserving recently acquired genes of adaptive value.

Prokaryotes encode an arsenal of immune systems to prevent infection by mobile elements and several regulatory systems to repress their expression [9]. Under the new model (henceforth named the chromosomal curing model), transformation has a key, novel position in this arsenal because it allows the expression of the incoming DNA while subsequently removing deleterious elements from the genome.

Mobile elements encode their own tools to evade host immune systems [9]. Accordingly, they have evolved ways to interfere with natural transformation [3].
Some mobile genetic elements integrate at, and thus inactivate, genes encoding the machineries required for DNA uptake or recombination. Other elements express nucleases that degrade exogenous DNA, precluding its uptake. These observations suggest an evolutionary arms race between the host, which uses natural transformation to cure its genome, and mobile genetic elements, which target these functions for their own protection. This lends further credibility to the hypothesis that transformation is a key player in the intra-genomic conflicts between prokaryotes and their mobile elements.

Previous studies have proposed alternative explanations for the evolution of natural transformation, including selection for allelic recombination and horizontal gene transfer [10], for nutrient acquisition [11], or for DNA repair [12]. The latter hypothesis has recently enjoyed renewed interest following observations that DNA-damaging agents induce transformation [13,14], along with intriguing suggestions that competence might be advantageous even in the absence of DNA uptake [15,16]. The hypothesis that transformation evolved to acquire nutrients has received less support in recent years.

Two key traits of transformation (host genetic control of the process and a preference for conspecific DNA) share some resemblance with the recombination processes occurring during sexual reproduction. Yet the analogy between the two processes must be handled with care, because transformation results, at best, in gene conversion of relatively small DNA fragments from another individual. The effect of sexual reproduction on genetic linkage is thought to be advantageous in the presence of genetic drift or of weak, negative, or fluctuating epistasis [17].
Interestingly, these conditions could frequently be met by bacterial pathogens [18], which might explain why naturally transformable bacteria are so common among human pathogens, such as Streptococcus pneumoniae, Helicobacter pylori, Staphylococcus aureus, Haemophilus influenzae, and Neisseria spp. The most frequent criticism of the analogy between transformation and sexual reproduction is that environmental DNA from dead individuals is unlikely to carry better alleles than those of the living recipient [11]. This difficulty is circumvented in bacteria that actively export copies of their DNA to the extracellular environment. Furthermore, recent theoretical studies have shown that competence can be adaptive even when the DNA originates from individuals with lower-fitness alleles [19,20]. Mathematically speaking, sexual exchanges with the dead might be better than no exchanges at all.

Evaluating the relative merits of the different models that aim to explain the raison d'être of natural transformation is complicated by the fact that they share several predictions. For example, the induction of competence in maladapted environments can be explained by the need for DNA repair (more DNA damage under these conditions), by selection for adaptation (through recombination or HGT), and by the chromosomal curing model, because mobile elements are more active under such conditions (leading to more intense selection for their inactivation). Some predictions of the latter model (the rapid diversification and loss of mobile elements, and their targeting of the competence machinery) can also be explained by models involving competition between mobile elements and their antagonistic association with the host. One of the great uses of mathematical models in biology resides in their ability to pinpoint the range of parameters and conditions within which each model can apply. The chromosomal curing model remains valid across broad ranges of its key variables.
This might not be the case for alternative models [3].

While further theoretical work will certainly help to specify the distinctive predictions of each model, realistic experimental evolution studies will be required to test them. Unfortunately, the few pioneering studies on this topic have reached somewhat contradictory conclusions. Some showed that natural transformation was beneficial to bacteria adapting to suboptimal environments (e.g., in times of starvation or under stress) [21,22], whereas others showed it was most beneficial during exponential growth and early stationary phase [23]. Finally, at least one study showed a negative effect of transformation on adaptation [24]. Part of these discrepancies might reflect differences between species, which express transformation under different conditions. They might also result from the low intraspecies genetic diversity in these experiments, in which case the use of more representative communities might clarify the conditions favoring transformation.

Macroevolutionary studies on natural transformation are hindered by the small number of prokaryotes known to be naturally transformable (82 species, following [25]). This in itself poses a challenge: if transformation is adaptive, why does it seem to be so rare? The benefits associated with the deletion of mobile elements, with functional innovation, or with DNA repair seem sufficiently general to affect many bacterial species. The trade-offs between the costs and benefits of transformation might lead to its selection only when mobile elements are particularly deleterious for a given species or when species face particular adaptive challenges. According to the chromosomal curing model, selection for transformation should be stronger in highly structured environments or when recombination fragments are small.
There is also some evidence that we have failed to identify many naturally transformable prokaryotes, in which case the question above may lose part of its relevance. Many genomes encode key components of the transformation machinery, suggesting that this process might be more widespread than currently acknowledged [25]. As an illustration, the quintessential model organism of microbiology, Escherichia coli, has only recently been shown to be naturally transformable; the conditions leading to the expression of this trait remain unknown [26].

The chromosomal curing model might also help explain other mechanisms shaping the evolution of prokaryotic genomes beyond the removal of mobile elements. Transformation-mediated deletion of genetic material, especially by homology-facilitated illegitimate recombination (Fig 1H), could remove the genes involved in the mobility of genetic elements, facilitating the host's co-option of functions encoded by mobile genetic elements. Several recent studies have highlighted the importance of such domestication processes in functional innovation and bacterial warfare [27]. The model might also apply to other mechanisms that transfer small DNA fragments between cells, including gene transfer agents [28], extracellular vesicles [29], and possibly nanotubes [30]. The chromosomal curing model might help unravel their ecological and evolutionary impact.
18.
Active learning methods have been shown to be superior to traditional lecture in terms of student achievement, and our findings on the use of Peer-Led Team Learning (PLTL) concur. Students in our introductory biology course performed significantly better if they engaged in PLTL. There was also a drastic reduction in the failure rate for underrepresented minority (URM) students with PLTL, which further resulted in closing the achievement gap between URM and non-URM students. With such compelling findings, we strongly encourage the adoption of Peer-Led Team Learning in undergraduate Science, Technology, Engineering, and Mathematics (STEM) courses.

Recent, extensive meta-analysis of over a decade of education research has revealed an overwhelming consensus that active learning methods are superior to traditional, passive lecture in terms of student achievement in post-secondary STEM courses [1]. In light of such clear evidence that traditional lecture is among the least effective modes of instruction, many institutions have been abandoning lecture in favor of "flipped" classrooms and active learning strategies. Regrettably, however, STEM courses at most universities continue to feature traditional lecture as the primary mode of instruction.

Although next-generation active learning classrooms are becoming more common, large instructor-focused lecture halls with fixed seating are still the norm on most campuses, including ours for the time being. While there are certainly ways to make learning more active in an amphitheater, peer-interactive instruction is limited in such settings. Of course, the laboratories accompanying lectures often provide more active learning opportunities.
But in the wake of commendable efforts to increase rigorous laboratory experiences at the sophomore and junior levels at Syracuse University, a difficult decision was made for the two-semester, mixed-majors introductory biology sequence: the lecture sections of the second-semester course were decoupled from the laboratory component, which was made optional. There were good reasons for this change, from both departmental and institutional perspectives. However, although STEM students not enrolling in the lab course would arguably be exposed to techniques and develop foundational process skills in the new upper-division labs, we were concerned about the implications for achievement among those students who would opt out of the introductory labs. Our concerns were apparently warranted, as students who did not take the optional lab course, regardless of prior achievement, earned scores averaging a letter grade lower than those of students who enrolled in the lab. However, students who opted out of the lab but engaged in Peer-Led Team Learning (PLTL) performed at levels equivalent to students who also took the lab course [2].

Peer-Led Team Learning is a well-defined active learning model involving small-group interactions between students, and it can be used along with or in place of the traditional lecture format that has become so deeply entrenched in university systems (Fig 1, adapted from [3]). PLTL was originally designed and implemented in undergraduate chemistry courses [4,5], and it has since been implemented in other undergraduate science courses, such as general biology and anatomy and physiology [6,7].
Studies on the efficacy of PLTL have shown improvements in students' grade performance, attitudes, retention in the course [6–11], conceptual reasoning [12], and critical thinking [13], though findings on the critical-thinking benefits for peer leaders have not been consistent [14].

Fig 1. The PLTL model.

In the PLTL workshop model, students work in small groups of six to eight, led by an undergraduate peer leader who has successfully completed the same course in which the peer-team students are currently enrolled. After being trained in group leadership methods, relevant learning theory, and the conceptual content of the course, peer leaders (who serve as role models) work collaboratively with an education specialist and the course instructor to facilitate small-group problem-solving. Leaders are not teachers. They are not tutors. They are not considered to be experts in the content, and they are not expected to provide answers to the students in the workshop groups. Rather, they mentor students to actively construct their own understanding of concepts.
20.
Lymph nodes are meeting points for circulating immune cells. A network of reticular cells that ensheathe a mesh of collagen fibers crisscrosses the tissue in each lymph node. This reticular cell network distributes key molecules and provides a structure for immune cells to move around on. During infections, the network can suffer damage. A new study has now investigated the network's structure in detail, using methods from graph theory. The study showed that the network is remarkably robust to damage: it can still support immune responses even when half of the reticular cells are destroyed. This is a further important example of how network connectivity achieves tolerance to failure, a property shared with other important biological and nonbiological networks.

Lymph nodes are critical sites for immune cells to connect, exchange information, and initiate responses to foreign invaders. More than 90% of the cells in each lymph node (the T and B lymphocytes of the adaptive immune system) reside there only temporarily and are constantly moving around as they search for foreign substances (antigen). When there is no infection, T and B cells migrate within distinct regions. But lymph node architecture changes dramatically when antigen is found and an immune response is mounted. New blood vessels grow and recruit vast numbers of lymphocytes from the blood circulation. Antigen-specific cells divide and mature into "effector" immune cells. The combination of these two processes (increased influx of cells from outside and proliferation within) can make a lymph node grow 10-fold within only a few days [1]. Accordingly, the structural backbone supporting lymph node function cannot be too rigid; otherwise, it would impede this rapid organ expansion.
This structural backbone is provided by a network of fibroblastic reticular cells (FRCs) [2], which secrete a form of collagen (type III alpha 1) that produces reticular fibers: thin, threadlike structures with a diameter of less than 1 μm. Reticular fibers cross-link and form a spider web-like structure. The FRCs surrounding this structure form the reticular cell network (Fig 1), which was first observed in the 1930s [3]. Interestingly, experiments in which the FRCs were destroyed showed that the collagen fiber network remained intact [4].

Fig 1. Structure of the reticular cell network. The reticular cell network is formed by fibroblastic reticular cells (FRCs) whose cell membranes ensheathe a core of collagen fibers that acts as a conduit system for the distribution of small molecules [5]. In most other tissues, collagen fibers instead reside outside cell membranes, where they form the extracellular matrix. Inset: graph structure representing the FRCs in the depicted network as "nodes" (circles) and the direct connections between them as "edges" (lines). The shape and length of the fibers are not represented in the graph.

Reticular cell networks do not only support lymph node structure; they are also important players in the immune response. Small molecules from the tissue environment or from pathogens, such as viral protein fragments, can be distributed within the lymph node through the conduit system formed by the reticular fibers [5]. Some cytokines and chemokines that are vital for effective T cell migration, and the nitric oxide that inhibits T cell proliferation [6], are even produced by the FRCs themselves. Moreover, the network is thought of as a "road system" for lymphocyte migration [7]: in 2006, a seminal study found that lymphocytes roaming through lymph nodes were in contact with network fibers most of the time [8]. A few years earlier, it had become possible to observe lymphocyte migration in vivo by means of two-photon microscopy [9].
Movies from these experiments strikingly demonstrated that individual cells were taking very different paths, engaging in what appeared to be a "random walk." But these movies did not show the structures surrounding the migrating cells, which created an impression of motion in empty space. Appreciating the role of the reticular cell network in this pattern of motion [8] suggested that the complex cell trajectories reflect the architecture of the network along which the cells walk.

Given its important functions, it is surprising how little we know about the structure of the reticular cell network, compared to, for instance, our wealth of knowledge on neuron connectivity in the brain. This is partly because the reticular cells are hard to visualize. In vivo techniques like two-photon imaging do not provide sufficient resolution to reliably capture the fine-threaded mesh. Instead, thin tissue sections are stained with fluorescent antibodies that bind to the reticular fibers and are imaged with high-resolution confocal microscopy to reveal the network structure. One study [10] applied this method to determine basic parameters such as branch length and the size of gaps between fibers. Here, we discuss a recent study by Novkovic et al. [11] that took a different approach to investigating the structure of the reticular cell network: they applied methods from graph theory.

Graph theory is a classic subject in mathematics that is often traced back to Leonhard Euler's stroll through 18th-century Königsberg, Prussia. Euler could not find a circular route that crossed each of the city's seven bridges exactly once, and wondered how he could prove that such a route does not exist. He realized that the problem could be phrased in terms of a simple diagram containing points (parts of the city) and lines between them (bridges). Further detail, such as the layout of the city's streets, was irrelevant.
This was the birth of graph theory: the study of objects consisting of points (nodes) connected by lines (edges). Graph theory has diverse applications ranging from logistics to molecular biology. Since the beginning of this century, there has been strong interest in applying graph theory to understand the structure of networks that occur in nature, including biological networks such as neurons in the brain and, more recently, social networks like friendships on Facebook. Various mathematical models of network structure have been developed in an attempt to understand the network properties that are relevant in different contexts, such as the speed at which information spreads or the amount of damage a network can tolerate before breaking into disconnected parts. Three well-known network topologies are random, small-world, and scale-free networks (Box 1). Novkovic et al. modeled reticular cell networks as graphs by considering each FRC to be a node and the fiber connections between FRCs to be edges (Fig 1).
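The robustness question at the heart of the study can be sketched in this graph-theoretic spirit (a minimal illustration only: a generic random graph stands in for the real FRC imaging data, and the node count, mean degree, and seeds are assumptions for the example, not values from the study). Treat each cell as a node, delete a random fraction of nodes, and measure how large the surviving connected component remains:

```python
import random
from collections import deque

def random_graph(n=200, avg_degree=4, seed=7):
    """Erdos-Renyi-style random graph as an adjacency dict {node: set(neighbours)}."""
    random.seed(seed)
    p = avg_degree / (n - 1)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if random.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def largest_component(adj):
    """Size of the largest connected component, via breadth-first search."""
    seen, best = set(), 0
    for start in adj:
        if start in seen:
            continue
        seen.add(start)
        queue, size = deque([start]), 0
        while queue:
            u = queue.popleft()
            size += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        best = max(best, size)
    return best

def ablate(adj, fraction, seed=7):
    """Delete a random fraction of nodes (mimicking destruction of FRCs)."""
    random.seed(seed)
    removed = set(random.sample(sorted(adj), int(fraction * len(adj))))
    return {u: nbrs - removed for u, nbrs in adj.items() if u not in removed}

g = random_graph()
for f in (0.0, 0.25, 0.5):
    sub = ablate(g, f)
    print(f"removed {f:.0%}: largest component {largest_component(sub)} of {len(sub)} nodes")
```

In this regime the graph sits well above its percolation threshold, so even after half the nodes are removed most survivors remain mutually reachable, the same qualitative failure tolerance that Novkovic et al. report for reticular cell networks.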