Similar Articles
20 similar articles found (search time: 46 ms)
1.
Scabies has recently gained international attention, with the World Health Organization (WHO) recognizing it as a neglected tropical disease. The International Alliance for the Control of Scabies recently formed as a partnership of more than 15 different countries, with an aim to lead a consistent and collaborative approach to preventing and controlling scabies globally. Scabies is most prevalent in low-resource and low socioeconomic areas that experience overcrowding and has a particularly high prevalence in children, with an estimated 5% to 10% in endemic countries. Scabies is widespread in remote Aboriginal and Torres Strait Islander communities in Australia, with the prevalence in Aboriginal and Torres Strait Islander children in remote communities estimated to be as high as 33%, making the region the third most affected in the world. This population group also has very high rates of secondary complications of scabies such as impetigo, poststreptococcal glomerulonephritis (PSGN), and rheumatic heart disease (RHD). This article is a narrative review of scabies in remote Aboriginal and Torres Strait Islander populations in Australia, including clinical manifestations of disease and current treatment options and guidelines. We discuss traditional approaches to prevention and control as well as suggestions for future interventions, including revising Australian treatment guidelines to widen the use of oral ivermectin in high-risk groups or as a first-line treatment.

Scabies has recently gained international attention, with the World Health Organization (WHO) recognizing it as a neglected tropical disease [1]. The International Alliance for the Control of Scabies recently formed as a partnership of more than 15 different countries, with an aim to lead a consistent and collaborative approach to preventing and controlling scabies globally [2]. In Australia, 10 million dollars was awarded to the Murdoch Children’s Research Institute to implement the World Scabies Elimination Program—an initiative aimed at collecting data from many affected countries and scaling up mass drug administration (MDA) [3].

Scabies is most prevalent in low-resource and low socioeconomic areas that experience overcrowding and has a particularly high prevalence in children, with an estimated 5% to 10% in endemic countries [4,5]. The 2015 Global Burden of Disease Study ranked scabies with the 101st highest disability-adjusted life years (DALYs) estimate out of 246 conditions [6]. This is, however, likely an underestimate, as secondary complications, such as impetigo and kidney damage, were not included in this study [6,7]. A study from Fiji showed that 94% of impetigo was attributable to scabies [8], and it is estimated that approximately half of the instances of acute poststreptococcal glomerulonephritis (PSGN) in tropical regions can be attributed to skin infections [9].

2.
Two articles published earlier this year in the International Journal of Epidemiology [1,2] have re-ignited the debate over the World Health Organization’s long-held recommendation of mass-treatment of intestinal helminths in endemic areas. In this note, we discuss the content and relevance of these articles to the policy debate, and review the broader research literature on the educational and economic impacts of deworming. We conclude that existing evidence still indicates that mass deworming is a cost-effective health investment for governments in low-income countries where worm infections are widespread.

3.
We describe an unusual case of type 2 leprosy reaction (T2R) with septic shock–like features induced by helminth infection in a 31-year-old Moluccan male patient with a history of completed treatment with the WHO multidrug therapy (MDT)–multibacillary (MB) regimen 2 years before admission. During the course of illness, the patient had numerous complications, including septic shock, anemia, and disseminated intravascular coagulation (DIC). Nevertheless, antibiotic therapies failed to give significant results, and the source of infection could not be identified. Helminth infection was subsequently revealed by endoscopic examination followed by parasitological culture. Symptoms resolved, and organ function–specific markers returned to normal levels, within 3 days of anthelmintic treatment. This report demonstrates the challenge in the diagnosis and treatment of severe T2R. Given that helminth infections may trigger severe T2R that mimics septic shock, health professionals need to be aware of this clinical presentation, especially in regions where both diseases are endemic.

Type 2 leprosy reaction (T2R) is a type III hypersensitivity reaction that can occur in people with lepromatous or borderline lepromatous leprosy before, during, or after completion of multidrug therapy (MDT). Its clinical manifestations are highly variable and can be limited to the skin or accompanied by systemic disruption [1,2]. Uncommonly, it may also present with fever, hypotension, and tachycardia that mimic septic shock [3]. Helminth infections have been shown to modulate the host immune response and induce leprosy reactions [4]. While concurrent helminth infection may mitigate true sepsis by preventing exaggerated inflammation and severe pathology [5], treating the helminth coinfection contributed directly to the dramatic improvement of the patient’s clinical and laboratory outcomes in this report.

4.
The Zika virus outbreak in the Americas has caused global concern. To help accelerate the fight against Zika, we launched the OpenZika project. OpenZika is an IBM World Community Grid project that uses distributed computing on millions of computers and Android devices to run docking experiments, in order to dock tens of millions of drug-like compounds against crystal structures and homology models of Zika proteins (and other related flavivirus targets). This will enable the identification of new candidates that can then be tested in vitro, to advance the discovery and development of new antiviral drugs against the Zika virus. The docking data are being made openly accessible so that all members of the global research community can use them to further advance drug discovery studies against Zika and other related flaviviruses.

The Zika virus (ZIKV) has emerged as a major public health threat to the Americas as of 2015 [1]. We have previously suggested that it represents an opportunity for scientific collaboration and open scientific exchange [2]. The health of future generations may very well depend on the decisions we make, our willingness to share our findings quickly, and open collaboration to rapidly find a cure for this disease. Since February 1, 2016, when the World Health Organization declared the cluster of microcephaly cases, Guillain-Barré syndrome, and other neurological disorders associated with ZIKV in Latin America and the Caribbean a Public Health Emergency of International Concern (PHEIC) [3], we have seen a rapid increase in publications (S1 References and main references). We [2] and others [4,5] described steps that could be taken to initiate a drug discovery program on ZIKV. For example, computational approaches, such as virtual screening of chemical libraries or focused screening to repurpose FDA and/or EU-approved drugs, can be used to help accelerate the discovery of an anti-ZIKV drug.
An antiviral drug discovery program can be initiated using structure-based design, based on homology models of the key ZIKV proteins. Given the initial lack of structural information on the ZIKV proteins, we built homology models for all of them, based on close homologs such as dengue virus, using freely available software [6] (S1 Table). These were made available online on March 3, 2016. We also predicted the site of glycosylation of glycoprotein E as Asn154, which was recently experimentally verified [7].

Since the end of March 2016, we have seen two cryo-EM structures and 16 crystal structures covering five target classes (S1 Table). These structures, alongside the homology models, represent potential starting points for docking-based virtual screening campaigns to help find molecules that are predicted to have high affinity for ZIKV proteins. These predictions can then be tested against the virus in cell-based assays and/or in individual protein-based assays. There are millions of molecules available that can be assayed, but which ones are likely to work, and how should we prioritize them?

In March, we initiated a new open collaborative project called OpenZika (Fig 1), with IBM’s World Community Grid (WCG, worldcommunitygrid.org), which has been used previously for distributed computing projects (S2 Table). On May 18, 2016, the OpenZika project began the virtual screening of ~6 million compounds from the ZINC database (Fig 1), as well as the FDA-approved drugs and the NIH clinical collection, using AutoDock Vina with the homology models and crystal structures (S1 Table, S1 Text, S1 References), to discover novel candidate compounds that can potentially be developed into new drugs for treating ZIKV.
These will be followed by additional virtual screens with a new ZINC library of ~38 million compounds, and the PubChem database (at most ~90 million compounds), after their structures are prepared for docking.

Fig 1. Workflow for the OpenZika project. A. Docking input files of the targets and ligands are prepared, and positive control docking studies are performed. The crystallographic binding mode of a known inhibitor is shown as sticks with dark purple carbon atoms, while the docked binding mode against the NS5 target from HCV has cyan carbons. Our pdbqt files of the libraries of compounds we screen are also openly accessible (http://zinc.docking.org/pdbqt/). B. We have already prepared the docking input files for ~6 million compounds from ZINC (i.e., the libraries that ALP previously used in the GO Fight Against Malaria project on World Community Grid), which are currently being used in the initial set of virtual screens on OpenZika. C. IBM’s World Community Grid is an internet-distributed network of millions of computers (Mac, Windows, and Linux) and Android-based tablets or smartphones in over 80 countries. Over 715,000 volunteers donate their dormant computer time (that would otherwise be wasted) towards projects that are both (a) run by an academic or nonprofit research institute and (b) devoted to benefiting humanity. D. OpenZika is harnessing World Community Grid to dock millions of commercially available compounds against multiple ZIKV homology models and crystal structures (and targets from related viruses) using AutoDock Vina (AD Vina). This ultimately produces candidates (virtual hits that produced the best docking scores and displayed the best interactions with the target during visual inspection) against individual proteins, which can then be prioritized for in vitro testing by collaborators.
After it is inspected, all computational data against ZIKV targets will be made open to the public on our website (http://openzika.ufg.br/experiments/#tab-id-7), and OpenZika results are also available upon request. The computational and experimental data produced will be published as quickly as possible.

Initially, compounds are being screened against the ZIKV homologs of drug targets that have been well validated in research against dengue and hepatitis C viruses, such as NS5 and glycoprotein E (S1 Table, S1 Text, S1 References). These may allow us to identify broad-spectrum antivirals against multiple flaviviruses, such as dengue virus, West Nile virus, and yellow fever virus. In addition, docking against the crystal structure of a related protein from a different pathogen can sometimes discover novel hits against the pathogen of interest [8].

As well as applying docking-based filters, the compounds virtually screened on OpenZika will also be filtered using machine learning models (S1 Text, S1 References). These should be useful selection criteria for subsequent tests by our collaborators in whole-cell ZIKV assays, to verify their antiviral activity in blocking ZIKV infection or replication. Since all OpenZika docking data will be in the public domain soon after they are completed and verified, we and other labs can then advance some of these new virtual candidates into experimentally validated hits, leads, and drugs through collaborations with wet labs.

This exemplifies open science, which should help scientists around the world as they address the long and arduous process of discovering and developing new drugs. Screening millions of compounds against many different protein models in this way would take far more resources and time than any academic researcher could generally obtain or spend. As of August 16, 2016, we have submitted 894 million docking jobs.
Over 6,934 CPU years have been donated to us, enabling over 439 million different docking jobs. We recently selected an initial batch of candidates for NS3 helicase (data openly available at http://openzika.ufg.br/experiments/#tab-id-7) for in vitro testing. Without the unique community of volunteers and the tremendous resources provided by World Community Grid, this project would have been very difficult to initiate at this scale in a reasonable time frame.

The OpenZika project will ultimately generate several billion docking results, which could make it the largest computational drug discovery project ever performed in academia. The challenges we foresee are finding laboratories with sufficient funding to pursue compounds, synthesize analogs, and develop target-based assays to validate our predictions and generate structure–activity relationship (SAR) data to guide the development of the new hits into leads and then drugs. Given the difficult nature of drug discovery and the eventual evolution of drug resistance, funding of ZIKV research, once initiated, will likely need to be sustained for several years, if not longer (e.g., HIV research has been funded for decades). As with other WCG projects, once scientists identify experimentally validated leads, finding a company to license and pursue them in clinical trials and beyond will need incentives such as the FDA Tropical Disease Priority Review voucher [9], which has a financial value on the open market [10].

By working together and opening our research to the scientific community, many other labs will also be able to take promising molecular candidates forward to accelerate progress towards defeating the ZIKV outbreak. We invite any interested researcher to join us (send us your models, or volunteer to assay the candidates we identify through this effort against any of the flaviviruses), and we hope new volunteers in the general public will donate their dormant, spare computing cycles to this cause.
We will ultimately report the full computational and experimental results of this collaboration.
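The prioritization step described above (keeping the virtual hits with the best docking scores for in vitro testing) has a simple core that can be sketched in a few lines. This is a minimal, hypothetical illustration: the compound IDs, Vina-style scores, and the score cutoff below are invented for the example and are not OpenZika parameters or data.

```python
# Hypothetical sketch: rank docked compounds by AutoDock Vina-style score
# (more negative = stronger predicted binding) and keep the top candidates.
# All names, scores, and the cutoff are illustrative assumptions.

def prioritize_hits(docking_results, top_n=3, score_cutoff=-7.0):
    """Return the top_n compound IDs whose score beats the cutoff."""
    passing = [(cid, s) for cid, s in docking_results.items() if s <= score_cutoff]
    passing.sort(key=lambda pair: pair[1])  # most negative score first
    return [cid for cid, _ in passing[:top_n]]

results = {
    "ZINC000001": -9.2,
    "ZINC000002": -6.1,
    "ZINC000003": -8.4,
    "ZINC000004": -7.3,
    "ZINC000005": -5.0,
}
print(prioritize_hits(results))  # ['ZINC000001', 'ZINC000003', 'ZINC000004']
```

In the actual project, score-based ranking is only a first pass; candidates are further filtered by visual inspection of binding modes and by machine learning models before being sent to collaborators.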

Advantages and Disadvantages of OpenZika

Advantages
  • Open Science could accelerate the discovery of new antivirals using docking and virtual screening
  • Docking narrows down compounds to test, which saves time and money
  • Free to use distributed computing on World Community Grid, and the workflow is simpler than using conventional supercomputers
Disadvantages
  • Concern around intellectual property ownership and whether companies will develop drugs coming from this effort
  • Need for experimental assays will always be a factor
  • Testing in vitro and in vivo is not free, nor are the samples of the compounds

5.
Ribavirin is the only available Lassa fever treatment. The rationale for using ribavirin is based on one clinical study conducted in the early 1980s. However, reanalysis of previous unpublished data reveals that ribavirin may actually be harmful in some Lassa fever patients. An urgent reevaluation of ribavirin is therefore needed.

Fifty years after its discovery, Lassa fever remains uncontrolled, and mortality remains unacceptably high. Since 2015, Nigeria has been experiencing increasingly large outbreaks of Lassa fever, with new peaks reached in 2016, 2017, and 2018. In 1987, McCormick and colleagues reported a case fatality rate (CFR) of 16.5% among 441 patients hospitalized in Sierra Leone [1]. In Nigeria in 2019, 124 deaths were recorded among 554 laboratory-confirmed cases, for a CFR of 22% [2].

Ribavirin is the only available Lassa fever–specific treatment and has been used routinely for over 25 years. However, intravenous ribavirin is not licensed for Lassa fever. Its mechanism of action is unclear, it is expensive and hard to source, and it has well-known toxicities [3]. Therefore, the evidence for using ribavirin in Lassa fever deserves careful scrutiny. The emergence of potential new therapeutics for Lassa fever, such as favipiravir and monoclonal antibodies, adds further weight to the case for reconsidering the role of ribavirin, since the evaluation of new drugs in clinical trials requires a comparison against existing treatments with a known efficacy and safety profile [4,5].

The rationale for using ribavirin in Lassa fever is primarily based on one clinical study conducted in Sierra Leone in the late 1970s and early 1980s. McCormick and colleagues [6] reported that in Lassa fever patients with a serum aspartate aminotransferase (AST) level of ≥150 IU/L, the use of intravenous ribavirin within the first 6 days of illness reduced the fatality rate from 61% (11/18) with no ribavirin to 5% (1/20) (p = 0.002). These authors concluded that ribavirin is effective in the treatment of Lassa fever. However, there are long-standing concerns about the methods used in this study. Although randomization was used to assign patients to treatment groups, the comparisons presented were not according to the original randomized groups, and we have reconstructed their derivation (Fig 1).
Serious limitations of the comparisons presented include the use of historic controls, the inclusion of pregnant women in the control group but their exclusion from the ribavirin group (case fatality is around 2-fold higher in pregnant women than in nonpregnant patients), and the post hoc merging of treatment groups. Despite this, and the fact that the results only supported the use of ribavirin in nonpregnant adult patients with AST ≥150 IU/L, this study is the basis upon which ribavirin is now used in all patients with Lassa fever, including children, pregnant women, and people with normal liver function.

Fig 1. Reconstruction of the McCormick et al. data. AST, aspartate aminotransferase; PW, pregnant women. † Discrepancy within McCormick et al., with 39 patients reported treated with oral ribavirin but only 38 (14+24) outcomes reported. ‡ Discrepancy within McCormick et al., with table 1 reporting 12/63 but the text reporting 13/62.

It has been well known among Lassa specialists that the McCormick study reports a subset of a much larger dataset assembled by the Lassa treatment unit in Sierra Leone, and that a report on the full dataset was commissioned by the United States Army Medical Research and Development Command. One of us (PH) therefore submitted a freedom of information (FOI) request to access this report. The full report and an accompanying memo are available, and we encourage readers to access and read the materials [7,8]. The memo states that some of the original trial records were unavailable and that the data should be “interpreted with extreme caution.” Nonetheless, the report presents data from 1977 through 1991 on 807 Lassa fever patients with a known outcome who were assigned to different ribavirin treatment regimens.
These newly available data raise important questions about the safety and efficacy of ribavirin for the treatment of Lassa fever.

The original data were lost during the civil war in Sierra Leone, but the report contains tables showing the distribution of characteristics of the whole population according to treatment group, an appendix showing individual data for the 405 patients who died, and the results of a logistic regression analysis comparing the effect of ribavirin with no treatment for some of the ribavirin regimens, after adjusting for patient characteristics. Based on these data, we derived aggregated datasets containing the number of deaths according to treatment group and individual characteristics. We combined groups I (“No treatment given”) and X (“Drugs were not available”) as no treatment, and all groups in which ribavirin was administered (II, III, and V to IX) as ribavirin. Exhibit III-8 in the FOI report presented case fatality by treatment group and AST, from which we derived crude odds ratios (ORs) comparing ribavirin with no treatment. The logistic regression reported in Exhibit III-9 was restricted to “those treatment groups that yielded the lowest case fatality rates with respect to untreated patients in the high severity patient illness category” (groups II, III, V, and VII). It was adjusted for age, gender, time to admission, time to treatment, length of stay, and log(AST). We also reconstructed analyses by digitizing the data on individuals who died in Appendix D, calculating the number of deaths according to treatment group and AST, and subtracting these numbers from the totals presented in Exhibit III-2. These allowed us to estimate overall mortality ORs for ribavirin before and after adjusting for AST, although the numbers did not entirely match, and so the number of deaths was reduced in some small groups.

Estimates of the effect of oral and intravenous ribavirin from the McCormick study, and of all ribavirin from the full report, are shown in Fig 2.
Based on the crude ORs derived from Exhibit III-8, ribavirin reduced mortality only in patients with serum AST ≥150 IU/L, with less benefit (OR 0.48 [95% CI 0.30 to 0.78]) than reported by McCormick and colleagues. However, ribavirin appeared to increase mortality in patients with serum AST <150 IU/L (2.90 [1.42 to 5.95]). In fact, in our analysis, the only stratum in which ribavirin appeared protective (0.38 [0.21 to 0.70]) was serum AST >300 IU/L (Table H in S1 Text). The logistic regression reported in the FOI report suggested a modest reduction in mortality, but the reasons for the choice of treatment groups compared were unclear. In the reconstructed analyses, ribavirin was associated with overall increased mortality (2.12 [1.67, 2.68]), although this was attenuated after adjustment for AST (1.48 [1.05, 2.08]).

Fig 2. Forest plot of the OR of death in treatment and risk subgroups. AST, aspartate aminotransferase; FOI, freedom of information; OR, odds ratio.

In our view, there is a compelling case to reevaluate the role of ribavirin in the care of patients with Lassa fever. The data suggest that ribavirin treatment may harm Lassa fever patients with AST <150 IU/L. The limitations revealed by the US Army report, such as large amounts of missing data, unclear treatment allocation practices, imbalances in treatment groups, and errors in coding serology results, cast further doubt on the conclusions of the McCormick study. This aligns with 2 recent systematic reviews, by Eberhardt and colleagues and Cheng and colleagues, which concluded that the efficacy of ribavirin in Lassa fever was uncertain because of critical risk of bias in existing studies [9,10].

Challenging a quarter century of clinical practice is difficult. The first step is to acknowledge the inadequacies in our knowledge and to ensure that treatment recommendations for Lassa fever better reflect the (weak) strength of evidence for ribavirin in different patient populations.
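The crude odds ratios discussed above are derived from 2×2 tables of deaths and survivors by treatment group. As a minimal sketch of that arithmetic, the code below computes a crude OR with a Woolf-type 95% confidence interval, using the counts reported by McCormick and colleagues for the AST ≥150 IU/L stratum (1 death among 20 ribavirin-treated patients vs 11 among 18 untreated). This reconstruction is ours for illustration; it is not the authors' analysis code.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio for a 2x2 table with a Woolf-type 95% CI.
    a, b: deaths and survivors in the treated group;
    c, d: deaths and survivors in the untreated group."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# McCormick et al.: 1/20 deaths with IV ribavirin vs 11/18 without (AST >= 150 IU/L)
or_, lo, hi = odds_ratio_ci(1, 19, 11, 7)
print(f"OR = {or_:.3f} (95% CI {lo:.3f} to {hi:.3f})")
```

The Woolf interval is only one of several options; with cells this small, an exact or continuity-corrected method would give somewhat different bounds.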
Vigorous efforts should be made to engage clinicians and patients in designing a placebo-controlled trial to assess the safety and efficacy of ribavirin treatment in Lassa fever patients, particularly in those with milder disease (as may be indicated by an admission AST <150 IU/L), in whom the available evidence is compatible with ribavirin causing more harm than good.

In conclusion, Lassa fever patients are receiving a drug that may lack efficacy or cause harm. It is incumbent on us to ensure that the next 25 years of Lassa fever treatment are built on more solid foundations.

6.
Neurocysticercosis (NCC), the infection of the nervous system by the cystic larvae of Taenia solium, is a highly pleomorphic disease because of differences in the number and anatomical location of lesions, the viability of parasites, and the severity of the host immune response. Most patients with parenchymal brain NCC present with few lesions and a relatively benign clinical course, but massive forms of parenchymal NCC can carry a poor prognosis if not recognized and appropriately managed. We present the main presentations of massive parenchymal NCC and their differential characteristics.

Infection of the central nervous system by the larval stage of Taenia solium—the pork tapeworm—causes neurocysticercosis (NCC), a highly pleomorphic disease [1]. This pleomorphism is partly related to differences in the number and anatomical location of lesions, the viability of parasites, and the severity of the host immune response against the infection. Cysticerci may be located within the brain parenchyma, the subarachnoid space, the ventricular system, the spinal cord, the sellar region, or even the subdural space.

Most patients with parenchymal NCC present with few lesions and a clinical course that is often more benign than that observed in the subarachnoid and ventricular forms of NCC, where a sizable proportion of patients are left with disabling sequelae or may even die as a result of the disease [2,3]. Nevertheless, massive forms of parenchymal NCC require special attention to reduce the risk of complications related to the disease itself or to inadequate treatment. Here, we present the main presentations of massive parenchymal NCC and their differential characteristics. There is no standardized definition of how many cysts constitute massive NCC: while the term “massive” has usually been applied when there are more than 100 lesions in the brain parenchyma, others have used smaller numbers (e.g., 50), and there is no defined cutoff.

7.
Estimating the case-fatality risk (CFR)—the probability that a person dies from an infection given that they are a case—is a high priority in epidemiologic investigation of newly emerging infectious diseases and sometimes in new outbreaks of known infectious diseases. The data available to estimate the overall CFR are often gathered for other purposes (e.g., surveillance) in challenging circumstances. We describe two forms of bias that may affect the estimation of the overall CFR—preferential ascertainment of severe cases and bias from reporting delays—and review solutions that have been proposed and implemented in past epidemics. Also of interest is the estimation of the causal impact of specific interventions (e.g., hospitalization, or hospitalization at a particular hospital) on survival, which can be estimated as a relative CFR for two or more groups. When observational data are used for this purpose, three more sources of bias may arise: confounding, survivorship bias, and selection due to preferential inclusion in surveillance datasets of those who are hospitalized and/or die. We illustrate these biases and caution against causal interpretation of differential CFR among those receiving different interventions in observational datasets. Again, we discuss ways to reduce these biases, particularly by estimating outcomes in smaller but more systematically defined cohorts ascertained before the onset of symptoms, such as those identified by forward contact tracing. Finally, we discuss the circumstances in which these biases may affect non-causal interpretation of risk factors for death among cases.

The case-fatality risk (CFR) is a key quantity in characterizing new infectious agents and new outbreaks of known agents. The CFR can be defined as the probability that a case dies from the infection. Several variations of the definition of “case” are used for different infections, as discussed in Box 1.
Under all these definitions, the CFR characterizes the severity of an infection and is useful for planning and determining the intensity of a response to an outbreak [1,2]. Moreover, the CFR may be compared between cases who do and do not receive particular treatments as a way of trying to estimate the causal impact of these treatments on survival. Such causal inference might ideally be done in a randomized trial in which individuals are randomly assigned to treatments, but this is often not possible during an outbreak for logistical, ethical, and other reasons [3]. Therefore, observational estimates of CFR under different treatment conditions may be the only available means to assess the impact of various treatments.
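The reporting-delay bias mentioned above has a simple arithmetic core: dividing cumulative deaths by all reported cases mid-outbreak dilutes the numerator with recent cases whose outcomes are not yet known. A minimal sketch with invented numbers illustrates this, alongside one common (still imperfect) correction that restricts the denominator to cases with a resolved outcome:

```python
# Illustrative sketch of reporting-delay bias in CFR estimation.
# All counts are made up for the example; neither estimator is free of
# the ascertainment biases discussed in the text.

def naive_cfr(deaths, confirmed_cases):
    """Deaths over all confirmed cases; biased downward mid-outbreak,
    because recently reported cases have not yet had time to die."""
    return deaths / confirmed_cases

def resolved_cfr(deaths, recoveries):
    """Restrict the denominator to cases with a known outcome; a simple
    correction for reporting delays (can overcorrect if deaths are
    reported faster than recoveries)."""
    return deaths / (deaths + recoveries)

deaths, cases, recoveries = 30, 400, 120  # hypothetical mid-outbreak snapshot
print(naive_cfr(deaths, cases))           # 0.075
print(resolved_cfr(deaths, recoveries))   # 0.2
```

More sophisticated corrections model the full delay distribution from onset to death rather than discarding unresolved cases.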

Box 1. Definition of the CFR.

The CFR itself is an ambiguous term, as its definition and value depend on what qualifies an individual to be a “case.” Several different precise definitions of CFR have been used in practice, as have several imprecise ones. The infection-fatality risk (sometimes written IFR) defines a case as a person who has shown evidence of infection, either by clinical detection of the pathogen or by seroconversion or other immune response. Such individuals may or may not be symptomatic, though asymptomatic ones may go undetected. The symptomatic case-fatality risk (sCFR) defines a case as someone who is infected and shows certain symptoms. Infection in many outbreaks is given several gradations, including confirmed (definitive laboratory confirmation), probable (high degree of suspicion, by various clinical and epidemiologic criteria, without laboratory confirmation), and possible or suspected (lower degree of suspicion). This paper describes issues in estimating any of these risks or comparing them across groups, but does not go into the details of each possible definition.

Furthermore, unlike risks commonly used in epidemiologic research (e.g., the 5-year mortality risk), the length of the period during which deaths are counted for the CFR is rarely explicit, probably because it is considered to be short enough to avoid ambiguity in the definition of CFR. However, a precise definition of the CFR would need to include the risk period, e.g., the 1-month CFR of Ebola. Clearly, the definition of CFR for a particular investigation should be specified as precisely as possible.

However, observational studies conducted in the early phases of an outbreak, when public health authorities are appropriately concentrating on crisis response and not on rigorous study design, are challenging. A common problem is that disease severity of the cases recorded in a surveillance database will differ, perhaps substantially, from that of all cases in the population.
This issue has arisen in the present epidemic of Ebola virus disease in West Africa and in many previous outbreaks and epidemics [4–9] and will continue to arise in future ones.

Here we outline two biases that may occur when estimating the CFR in a population from a surveillance database, and three more that may occur when comparing the CFR between subgroups to estimate the causal effect of medical interventions. We also briefly consider the applicability of these biases to a different application: comparing the CFR across different groups of people, for example, by geography, sex, age, comorbidities, and other “unchangeable” risk factors. Such factors are “unchangeable” in the sense that they are not candidates for intervention in the setting of the outbreak, though some could, of course, change over longer timescales. The goal of estimating the CFR in groups defined by such unchangeable factors is not to understand the causal role of these factors in mortality, but to develop a predictive model for mortality that might be used to improve prognostic accuracy or identify disparities. Such predictions may be affected by survivorship bias and selection bias, but not by confounding, as we discuss.

8.
Interest in filariasis has found a new impetus now that neglected tropical diseases have their own journal. However, some of the advances published in renowned international journals have completely ignored previous publications on the subject, particularly those in languages other than English. The rapid assessment procedure for loiasis and the mapping of lymphatic filariasis provide two perfect illustrations of this. This problem may seem a bit outdated, given that all “good authors” now publish exclusively in English. It certainly is outdated for most areas of medicine. But, surely, this should not be the case for neglected tropical diseases, for which certain long-standing findings are every bit as important as what may be presented as new discoveries. One possibility would be for certain journals, such as PLOS Neglected Tropical Diseases, to include a specific heading permitting the publication in English of older studies that initially appeared in a language other than English. The texts would be English versions respecting the entirety of the original text. Submission should be accompanied by a presentation of the problem, with details and explanatory comments, with submission at the initiative of the authors of the former article in question or their students or sympathizers.

Interest in filariasis has found a new impetus now that neglected tropical diseases (NTDs) have their own journal. However, some of the advances published in renowned international journals have completely ignored previous publications on the subject, particularly those in languages other than English. This Viewpoint article is intended to make us ponder the issue of a language gap or discrimination existing in publishing outcomes and reference citations.
This is also a question of the deleterious effects of the obligation “to be in English or not to be.”

The rapid assessment procedure for loiasis (RAPLOA) and the geographical distribution of lymphatic filariasis provide two perfect illustrations of this. The RAPLOA has recently been widely used to determine the regional endemicity of loiasis and to update existing endemicity data for this disease over its global distribution range. This important work was recently published in PLOS NTDs [1]. The determination, within a population, of the prevalence or, preferably, the annual incidence of episodes of conjunctival migration by adult worms is a simple, non-invasive, relatively sensitive and specific method for evaluating the endemicity of Loa loa. This approach has proved particularly useful in areas in which both loiasis and onchocerciasis are observed: the mass treatment program to control onchocerciasis is based on the use of ivermectin, and there is a risk of adverse treatment outcomes in patients carrying large numbers of L. loa worms [2]. In regions of high endemicity, the correlation between the conjunctival migration index and the microfilarial index is strong overall, both for villages and for age groups. Its use as an epidemiological index was clearly proposed in a publication in 1994 [3]. However, as this article was published in French, in Médecine Tropicale (Marseille), it has never been cited, despite being listed in international databases, including PubMed. A poster communication concerning the same issue had no real impact either, despite being presented at an international congress [4]. The origin of this new epidemiological index (RAPLOA) is systematically attributed to two World Health Organization (WHO) publications in 2001 [5] and 2002 [6].
It is true that the studies reported in these publications validated the concept at a large scale and in different endemic foci.

The usefulness of specific clinical manifestations (eye worm and Calabar swelling) to assess L. loa had been recommended by different authors as early as 1950 [7]. But the correlation between the microfilarial index and the frequency of ocular migration had not been studied, and even less attention was paid to the notion of an epidemiological index, until the epidemiological studies carried out in Congo Republic (former People's Republic of the Congo) during the 1980s [8]. However, the index as such was clearly defined in 1994 [3]. Here is a direct translation of an excerpt of the French text published in 1994: “For loiasis, the usual parasitological indices (microfilarial index and mean microfilarial density) are the only measures recognized as providing information about the level of endemicity in humans. In addition to requiring blood samples standardized in terms of both volume and sampling time, these indices do not reflect the real level of parasitism, given the high frequency of infected subjects without microfilaria in the blood. Subjects infested with mature, fertile adult worms, as demonstrated by the removal of a subconjunctival female containing microfilaria from a patient with no detectable microfilaria in the blood, are frequently observed. Two symptoms are both specific and frequent in infected subjects both with and without microfilaria in the blood: subconjunctival migration of an adult worm and elusive, migrating edemas of the hands, wrists and lower part of the forearm” [9]. “The index of subconjunctival filarial migration over the preceding year is particularly useful, because it correlates well with the microfilarial index but is more sensitive” (Figure 1).
“Its determination involves precise questioning of the patient, which can be facilitated by the use of a demonstration chart, with diagrams and photographs” (see Figure 2).

Figure 1. Index of the subconjunctival migration of Loa loa adult worms and microfilarial index. Reproduced from Médecine Tropicale [3], released under CC BY 2.0 by Médecine Tropicale. IMSC = Indice de Migration Sous-Conjonctivale in French and Index of the SubConjunctival Migration in English. IM = Indice Microfilarien in French and Microfilarial Index in English.

Figure 2. Illustration of the passage of an adult worm (Loa loa) across the eye. This illustration (diagram and photograph) was made for presentation to patients questioned in endemic regions.

The conclusion of this article was formulated as follows: “Screening for foci of filarial endemicity could be improved by the use of a simplified method and the validation of simple, inexpensive indices. Once these foci have been identified, a more precise evaluation can be carried out.”

What is most astounding about the two WHO publications cited as the origin of this “new epidemiological index” [5], [6] is that the principal authors come from French-speaking African countries and/or work in or with this institution. They would therefore have been able to understand articles written in French.
Furthermore, the WHO has a long-standing culture of multilingualism, particularly in English and French.

Against this background, the rejection by the Bulletin of the World Health Organization and by other international journals published in English of an opinion article dealing with this issue and using the example of lymphatic filariasis does not seem to be justified, and is another illustration of “to be in English or not to be.” Indeed, filariasis due to Wuchereria bancrofti is systematically described as endemic in Congo and Gabon, two French-speaking countries, in non-specialist works on tropical medicine and in more specialist publications (WHO), despite a total absence of epidemiologic studies and/or confirmed case reports over the last 30 years to prove it. What is certain is that no case was found in the last studies conducted in these countries, at the end of the 1970s and during the 1980s; unfortunately, those studies were published in French. The studies that we carried out in the Congo as part of the National Project on Onchocerciasis and Other Filariases (between 1982 and 1987) confirmed the presence of four types of human filariasis: onchocerciasis, loiasis, and the filariases caused by Mansonella perstans and M. streptocerca. There was a total absence of confirmed cases of lymphatic filariasis (bancroftosis). In this case, it is not a question of the attribution of merit for a particular “discovery,” but of basic knowledge of the geographic distribution of a scarcely studied disease, lymphatic filariasis, in French Central Africa. Taking into account only publications in English, even older and poorly structured ones, has been, in our opinion, a source of confusion and has led to false conclusions being drawn about the distribution range of this disease.
This undoubtedly highlights the need to update knowledge by carrying out prospective studies (which seem to be underway), but these studies do not seem to be considered a matter of priority given the low levels of resources available and current health priorities.

This led us to publish this article in a French-language journal, but together with an entire translation into English [10]. Despite the bilingual nature of this publication, the international PubMed database identifies this article as being published in French, effectively ensuring that it will never be consulted, a classic “catch-22” situation! Indeed, this reference has never yet been cited by another author in a journal published in English. It may be that publication of an article in a language other than English makes it more likely that it will not be cited, even if the authors of a subsequent article have access to the journal in which it was published and can understand the language used. Here, we begin to encroach on ethical problems, and it is probably best not to delve too deeply. However, suffice it to say that the limited dissemination of publications in a language other than English may account for such equivocal attitudes.

The problem is not a rivalry between French and English, but the confrontation between English and all other languages of the world. Moreover, the problem is undoubtedly worse for works published in non-Western European languages such as Chinese, Russian, and Japanese, which are arguably even less accessible.

All things considered, this problem may seem a bit outdated, given that all “good authors” now publish exclusively in English. It certainly is outdated for most areas of medicine, where everything that is old is consigned to being nothing more than the history of medicine.
But, surely, this should not be the case for NTDs, for which certain long-standing findings are every bit as important as what may be presented as new discoveries.

One possibility would be for certain journals, such as PLOS NTDs, to include a specific heading permitting the publication of older studies that initially appeared in a language other than English (and are therefore currently little known). The texts included in this heading would essentially be English versions of these articles previously published in other languages, respecting the entirety of the original text.

This would concern studies considered of importance because they highlight a point that remains unclear, or because they describe an aspect presented as innovative in a review whose originality is due more to an incomplete reference list than to a true advance in knowledge. These articles should be judged in light of the knowledge and the technical and methodological means available at the time at which they were initially published. Submission should be accompanied by a presentation of the problem, with details and explanatory comments, with submission at the initiative of the authors of the article in question or their students or sympathizers.

9.
Several issues have been identified with the current programs for the elimination of onchocerciasis, which target only transmission by using mass drug administration (MDA) of the drug ivermectin. Alternative and/or complementary treatment regimens as part of a more comprehensive strategy to eliminate onchocerciasis are needed. We posit that the addition of “prophylactic” drugs, or of therapeutic drugs that can be utilized in a prophylactic strategy, to the toolbox of present microfilaricidal drugs and/or future macrofilaricidal treatment regimens will not only improve the chances of meeting the elimination goals but may also hasten the time to elimination and support achieving a sustained elimination of onchocerciasis. These “prophylactic” drugs will target the infective third- (L3) and fourth-stage (L4) larvae of Onchocerca volvulus and consequently prevent the establishment of new infections, not only in uninfected individuals but also in already infected individuals, and thus reduce the overall adult worm burden and transmission. Importantly, an effective prophylactic treatment regimen can utilize drugs that are already part of the onchocerciasis elimination program (ivermectin), those being considered for MDA (moxidectin), and/or the potential macrofilaricidal drugs (oxfendazole and emodepside) currently under clinical development. Prophylaxis of onchocerciasis is not a new concept. We present new data showing that these drugs can inhibit L3 molting and/or L4 motility at IC50 and IC90 values that are covered by the plasma concentrations of these drugs, based on the corresponding pharmacological profiles obtained in human clinical trials in which the drugs were tested at various doses for the treatment of various helminth infections.

Onchocerca volvulus is an obligate human parasite and the causative agent of onchocerciasis, a chronic neglected tropical disease prevalent mostly in sub-Saharan Africa. In 2017, 20.9 million people were infected, with 14.6 million having skin pathologies and 1.15 million having vision loss [1]. The socioeconomic impact of onchocerciasis and the debilitating morbidity caused by the disease prompted the World Health Organization (WHO) to initiate control programs that were first focused on reducing onchocerciasis as a public health problem; since 2012, the ultimate goal has been to eliminate it by 2030 [2]. Over the years, WHO sponsored and coordinated 3 major programs: the Onchocerciasis Control Programme (OCP), the African Programme for Onchocerciasis Control (APOC), and the Onchocerciasis Elimination Program of the Americas (OEPA). Since 1989, the control measures have depended on mass drug administration (MDA), annually or biannually, with ivermectin, which targets the transmitting stage of the parasite, the microfilariae [3-5]. However, several issues have been identified with the current MDA programs, including the need to expand treatment to more populations depending on baseline endemicity and transmission rates [2,6]. Moreover, it became apparent that alternative and/or complementary treatment regimens are needed as part of a more comprehensive strategy to eliminate onchocerciasis [2]. Ivermectin has only mild to moderate effects on the adult stages of the parasite [7-9], and there are communities in Africa where the effects of ivermectin are suboptimal [10]. It is also contraindicated in areas of Loa loa co-endemicity [11], as well as in children under the age of 5 and in pregnant women.
By relying only on MDA with ivermectin, the most optimistic mathematical modeling predicts that elimination will occur only in 2045 [12].

To support the elimination agenda, much of the recent focus has been on improving efficacy outcomes through improved microfilariae control with moxidectin and the discovery of macrofilaricidal drugs that target the adult O. volvulus parasites [13-18]. We posit that the addition of “prophylactic” drugs, or of therapeutic drugs that can be utilized in a prophylactic strategy, to the toolbox of present microfilaricidal drugs and/or future macrofilaricidal treatment regimens will not only improve the chances of meeting the elimination goals but may also hasten the time to elimination and support achieving a sustained elimination of onchocerciasis. These “prophylactic” drugs will target the infective third- (L3) and fourth-stage (L4) larvae of O. volvulus and consequently prevent the establishment of new infections, not only in uninfected individuals but also in already infected individuals, and thus reduce the overall adult worm burden and transmission. Importantly, an effective prophylactic treatment regimen can utilize drugs that are already part of the onchocerciasis elimination program (ivermectin), those being considered for MDA (moxidectin) [19,20], and/or the potential macrofilaricidal drugs (oxfendazole and emodepside) currently under clinical development [21].

Prophylaxis of onchocerciasis is not a new concept. In the 1980s, once ivermectin was introduced as a “prophylactic” drug against the filarial dog heartworm, Dirofilaria immitis [22], its prophylactic effects were also examined in Onchocerca spp. In chimpanzees, a single dose of ivermectin (200 μg/kg) was highly protective (83% reduction in patent infections) when given at the time of the experimental infection and tracked for development of patency over 30 months.
It was, however, much less effective (33% reduction in patent infections) when given 1 month postinfection with the L3s, at which time the L4s had already developed [23]. Moreover, monthly treatment with ivermectin at either 200 μg/kg or 500 μg/kg for 21 months completely protected naïve calves against the development of O. ochengi infection, as compared to untreated controls, which were 83% positive for nodules and 100% positive for patency [24]. When naïve calves exposed to natural infection were treated with either ivermectin (150 μg/kg) or moxidectin (200 μg/kg) monthly or quarterly, none of the animals developed detectable infections after 22 months of exposure, except 2 animals in the quarterly ivermectin-treated group, which had 1 nodule each; in the non-treated control group, the nodule prevalence was 78.6% [25]. These prophylactic studies in calves exposed to natural infections clearly demonstrated that monthly or quarterly treatments with ivermectin and/or moxidectin over 22 months were highly efficacious against the development of new infections. When ivermectin was administered every 3 months over a 4-year period in a highly endemic region of onchocerciasis in Cameroon, it resulted in reduced numbers of new nodules (17.7%) when compared to individuals who were treated annually. This recent study suggests that ivermectin may also have a better prophylactic effect in humans when administered quarterly [26].

Importantly, moxidectin, a member of the macrocyclic lactone family of anthelmintic drugs and, like ivermectin, also used in veterinary medicine [20], was recently approved for the treatment of onchocerciasis as a microfilaricidal drug in individuals over the age of 12 [20]. In humans, a single dose of moxidectin (8 mg) appeared to be more efficacious than a single dose of ivermectin (150 μg/kg) in terms of lowering microfilarial loads [17].
Modeling has shown that an annual treatment with moxidectin and a biannual treatment with ivermectin would achieve similar reductions in the duration of the MDA programs when compared to an annual treatment with ivermectin [27].

In our efforts to identify macrofilaricidal drugs, we tested a selection of drugs for their ability to inhibit the molting of O. volvulus L3 to L4 as part of the in vitro drug screening funnel [13,28-31]. With some being highly effective, we decided to also examine the effects of the known MDA drugs, and of those already in clinical development for macrofilaricidal effects, on the molting of L3 and the motility of L4 (S1 Text) as potential “prophylactic” drugs. When ivermectin and moxidectin were evaluated, we found that both drugs were highly effective inhibitors of molting: IC50 of 1.048 μM [918.86 ng/ml] and IC90 of 3.73 μM [2,949.1 ng/ml] for ivermectin, and IC50 of 0.654 μM [418.43 ng/ml] and IC90 of 1.535 μM [985.3 ng/ml] for moxidectin (Table 1 and S1 Fig), with moxidectin being more effective than ivermectin. When both drugs were tested against the L4, we found that both inhibited the motility of L4s after 6 days of treatment: ivermectin had an IC50 of 1.38 μM [1,207.6 ng/ml] and IC90 of 31.45 μM [27,521.9 ng/ml], while moxidectin had an IC50 of 1.039 μM [665.4 ng/ml] and IC90 of approximately 30 μM [approximately 19,194 ng/ml] (Table 1 and S1 Fig). Interestingly, when the treatment of L4 with both drugs was prolonged, the IC50 values for the inhibition of L4 motility on day 11 with ivermectin and moxidectin were 0.444 μM and 0.380 μM, respectively. Significantly, from the perspective of employing both drugs for prophylaxis against new infections with O. volvulus, moxidectin (8 mg) has an advantage: it achieves a maximum plasma concentration of 77.2 ± 17.8 ng/ml, is metabolized minimally, and has a half-life of 40.9 ± 18.25 days with an area under the curve (AUC) of 4,717 ± 1,494 ng*h/ml in healthy individuals [32], which covers the experimental IC50 achieved by moxidectin for inhibiting both L3 molting and L4 motility, and the IC90 for L3s. In comparison, ivermectin reaches a maximum plasma concentration of 54.4 ± 12.2 ng/ml with a half-life of 1.5 ± 0.43 days and an AUC of 3,180 ± 1,390 ng*h/ml in healthy humans [33], which only covers the IC50 for inhibiting molting of L3 and motility of L4. We therefore reason that, based on the significantly improved pharmacokinetic profile of moxidectin and its efficacy against both L3 and L4 larvae in vitro (Table 1), it might have a better “prophylactic” profile than ivermectin for its potential to interrupt the development of new O. volvulus infections, and thus ultimately affect transmission and further support the elimination of onchocerciasis. Adding to moxidectin's significance, in dogs it is a highly effective prophylactic drug against ivermectin-resistant D. immitis strains [19], an important attribute in the event that suboptimal responsiveness to ivermectin treatment becomes more widespread in the onchocerciasis-endemic regions of Africa. Testing the potential effect of moxidectin on the viability or development of transmitted L3 larvae was already recommended by Awadzi and colleagues in 2014 [34], when the excellent half-life of moxidectin in patients with onchocerciasis was realized. We have to acknowledge, however, that the key predictor of a drug's potency is actually the combination of exposure (drug concentration) at the site of action and the duration for which that exposure remains above the determined IC50/IC90.
As we have access to only the AUC, half-life, and Cmax data for each of the in vitro–tested drugs, the use of plasma concentrations for predicting the anticipated potency of these putative “prophylactic” drugs in vivo has to be further assessed with care during clinical trials.

Table 1. Inhibition of O. volvulus L3 molting and L4 motility in vitro by the prospective prophylactic drugs, and their essential pharmacokinetic parameters at doses currently used or deemed safe for use in humans.

Ivermectin: inhibition of L3 molting (a): IC50 1.048 μM (918.86 ng/ml), IC90 3.730 μM (2,949.1 ng/ml); inhibition of L4 motility (b): IC50 1.38 μM (1,207 ng/ml), IC90 31.45 μM (27,521 ng/ml). Pharmacokinetics (c): dose 150 μg/kg; Cmax 54.4 ± 12.2 ng/ml; half-life 36.6 ± 10.2 h; AUC 3,180 ± 1,390 ng*h/ml [33].

Moxidectin: L3 molting: IC50 0.654 μM (418.43 ng/ml), IC90 1.535 μM (985.3 ng/ml); L4 motility: IC50 1.039 μM (665 ng/ml), IC90 approximately 30 μM (approximately 19,194 ng/ml). Pharmacokinetics: dose 8 mg; Cmax 77.2 ± 17.8 ng/ml; half-life 981 ± 438 h; AUC 4,717 ± 1,494 ng*h/ml [32] (e).

Albendazole: L3 molting: IC50 0.007 μM (1.9 ng/ml), IC90 0.023 μM (5.8 ng/ml); L4 motility: IC50 >2 μM. Pharmacokinetics: dose 400 mg; Cmax 24.5 ng/ml; half-life 1.53 h; AUC 73 ng*h/ml [41].

Albendazole sulfoxide (d): L3 molting: IC50 0.008 μM (2.25 ng/ml), IC90 0.07 μM (19.69 ng/ml). Pharmacokinetics (after 400 mg albendazole): Cmax 288 ng/ml; half-life 8.56 h; AUC 3,418 ng*h/ml [41].

Oxfendazole: L3 molting: IC50 0.034 μM (10.7 ng/ml), IC90 0.071 μM (22.4 ng/ml); L4 motility: IC50 >2 μM. Pharmacokinetics: dose 15 mg/kg: Cmax 6,250 ± 1,390 ng/ml, half-life 9.97 ± 2.2 h, AUC 99,500 ± 2,440 ng*h/ml; dose 30 mg/kg: Cmax 5,300 ± 1,690 ng/ml, half-life 9.82 ± 3.46 h, AUC 78,300 ± 2,830 ng*h/ml [42].

Emodepside: L3 molting: IC50 0.0007 μM (0.8 ng/ml), IC90 0.002 μM (2.2 ng/ml); L4 motility: IC50 0.0005 μM (0.6 ng/ml), IC90 0.078 μM (87.3 ng/ml). Pharmacokinetics: dose 1 mg: Cmax 18.6 ng/ml, half-life 42.7 h, AUC 100 ng*h/ml; dose 40 mg: Cmax 434 ng/ml, half-life 392 h, AUC 3,320 ng*h/ml [43].
(a) O. volvulus L3 obtained from infected Simulium sp. were washed, distributed at approximately 10 larvae per well, and cocultured in contact with naïve human peripheral blood mononuclear cells for a period of 6 days with or without the respective drugs in vitro (S1 Text), as previously described [13,30]. Ivermectin (PHR1380, Sigma-Aldrich, St. Louis, Missouri, United States of America) and moxidectin (PHR1827, Sigma-Aldrich) were tested in the range of 0.01–10 μM; albendazole (A4673, Sigma-Aldrich), albendazole sulfoxide (35395, Sigma-Aldrich), and oxfendazole (31476, Sigma-Aldrich) in the range of 1–3 μM; and emodepside (Bayer) in the range of 0.3–1 μM, using 3-fold dilutions. On day 6, molting of L3 worms was recorded. Each condition was tested in duplicate and repeated at least once. The IC50 and IC90 were derived from nonlinear regression (curve fit) analysis on GraphPad Prism 6 with 95% confidence intervals.

(b) L3s were allowed to molt to L4 in the presence of PBMCs, and on day 6, when molting was complete, the L4 larvae were collected, distributed at 6–8 worms per well, and treated with the respective concentrations of drugs [ivermectin and moxidectin: 0.01–30 μM at 3-fold dilutions; emodepside: 0.03–3 μM at 10-fold dilutions, and 10 μM] for a period of 6 days. Inhibition of O. volvulus L4 motility was recorded as described [13,30]; representative videos of motility and inhibited motility can be viewed in Voronin and colleagues [30], S1–S3 Videos. Each condition was tested in duplicate and repeated at least once. The IC50 and IC90 were derived from nonlinear regression (curve fit) analysis on GraphPad Prism 6 with 95% confidence intervals.

(c) Information regarding the pharmacokinetic profiles of each drug was extracted from public data collected during the corresponding clinical trial(s) in humans, which are also referenced.

(d) Pharmacokinetic parameters of albendazole sulfoxide, the predominant metabolite of albendazole.

(e) Additional pharmacokinetic parameters for moxidectin, not only in healthy individuals but also in those living in Africa, can be found on the moxidectin FDA prescribing information website: https://www.drugs.com/pro/moxidectin.html. In patients with onchocerciasis, it is reported that a single dose of moxidectin (8 mg) achieves a maximum plasma concentration of 63.1 ± 20.0 ng/ml and has a half-life of 559 ± 525 h with an AUC of 2,738 ± 1,606 ng*h/ml.

AUC, area under the curve; Cmax, maximum plasma concentration.

The prospects for identifying additional “prophylactic” drugs against O. volvulus increased when we tested 3 other drugs: albendazole, already in use for controlling helminth infections in humans; and oxfendazole and emodepside, being tested by the Drugs for Neglected Diseases initiative (DNDi) as potential repurposed macrofilaricidal drugs for human indications [21]. Albendazole is a primary drug of choice for MDA treatment of soil-transmitted helminths (STH; hookworms, whipworms [in combination with oxantel pamoate], and ascarids) [35], as well as for the elimination of lymphatic filariasis in Africa when used in combination with ivermectin [36]. Oxfendazole, a member of the benzimidazole family, is currently indicated for the treatment of a range of lung and gastrointestinal parasites in cattle and other veterinary species, and is favorably considered for the treatment and control of helminth infections in humans [37].
Emodepside, an anthelmintic drug of the cyclooctadepsipeptide class, is used in combination with praziquantel to treat a range of gastrointestinal nematodes in dogs and cats [38-40].

We found that all 3 drugs were highly effective at inhibiting the molting of O. volvulus, even more so than ivermectin or moxidectin. The IC50 for inhibition of L3 molting with albendazole was 7 nM [1.9 ng/ml], and the IC90 was 23 nM [5.8 ng/ml]. The IC50 for inhibition of L3 molting with oxfendazole was 34 nM [10.7 ng/ml], and the IC90 was 71 nM [22.4 ng/ml] (Table 1 and S1 Fig). Albendazole and oxfendazole were less effective at inhibiting the motility of L4s, both having IC50 >2 μM (Table 1). In previous studies, we reported that tubulin-binding drugs (flubendazole and oxfendazole) affected the motility of L4s and L5s only after repeated treatments over 14 days in culture [13,30]. Hence, both drugs might be more effective against L3s than against L4s, a stage that may require prolonged treatments and further evaluation in future studies. Albendazole is used for STH treatment as a single dose of 400 mg. At this dose, it reaches a maximum plasma concentration of 24.5 ng/ml with a half-life of 1.53 hours (AUC of 73 ng*h/ml) [41], which covers the IC90 for inhibition of L3 molting. In comparison, albendazole sulfoxide, an important active metabolite of albendazole, reached a much higher maximum plasma concentration (288 ng/ml) and had a longer half-life (8.56 hours; AUC of 3,418 ng*h/ml) than albendazole [41] (Table 1), which covers the sulfoxide's IC50 of 8 nM [2.25 ng/ml] and IC90 of 70 nM [19.69 ng/ml] for inhibition of L3 molting in vitro.
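As an aside on the bracketed ng/ml figures quoted alongside the molar IC50/IC90 values: a micromolar concentration converts to a mass concentration by multiplying by the molecular weight. The sketch below reproduces a few of the Table 1 conversions; the molecular weights are approximate reference values supplied here for illustration, not figures from the article.

```python
# Conversion: ng/ml = μM × MW (g/mol), because 1 μM = 1 μmol/L and
# 1 μmol × (g/mol) = 1 μg, so μg/L = ng/ml.
# Molecular weights below are approximate reference values (assumptions,
# not taken from the source article).
MW = {
    "moxidectin": 639.8,    # g/mol
    "albendazole": 265.3,
    "emodepside": 1119.4,
}

def um_to_ng_per_ml(conc_um, mw_g_per_mol):
    """Convert a concentration in μM to ng/ml given the molecular weight."""
    return conc_um * mw_g_per_mol

# These reproduce the bracketed Table 1 values to rounding:
print(round(um_to_ng_per_ml(0.654, MW["moxidectin"]), 2))   # ≈ 418.43
print(round(um_to_ng_per_ml(0.007, MW["albendazole"]), 1))  # ≈ 1.9
print(round(um_to_ng_per_ml(0.0007, MW["emodepside"]), 1))  # ≈ 0.8
```

This also makes cross-drug comparisons easy to sanity-check: a sub-nanomolar IC50 for a large molecule like emodepside still corresponds to well under 1 ng/ml.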
Oxfendazole, when administered at the doses currently being tested for efficacy against trichuriasis (whipworm infection), 30 mg/kg and 15 mg/kg, achieved maximum plasma concentrations of 5,300 ± 1,690 and 6,250 ± 1,390 ng/ml, respectively, with a half-life of approximately 9.9 hours (AUC: 78,300 ± 2,830 to 99,500 ± 2,440 ng*h/ml) (Table 1) [42], both of which cover the IC90 for inhibition of L3 molting. Hence, from the perspective of preventing newly established infections with O. volvulus L3 by inhibiting their molting, oxfendazole and albendazole are additional compelling candidates to consider.

Intriguingly, emodepside was the most effective drug on both L3s and L4s; it inhibited molting with an IC50 of 0.7 nM [0.8 ng/ml] (which is 10, 48.5, and approximately 1,000 times more potent than albendazole, oxfendazole, and moxidectin, respectively) and an IC90 of 2 nM [2.2 ng/ml]. Importantly, it also inhibited the motility of L4s by day 6, with an IC50 of 0.5 nM [0.6 ng/ml] and an IC90 of 78 nM [87.3 ng/ml] (Table 1 and S1 Fig), which is also more potent than the other drugs. In the ascending-dose (1 to 40 mg) human clinical trial (NCT02661178), emodepside achieved a maximum plasma concentration in the range of 18.6 to 595 ng/ml, an AUC of 100 to 4,112 ng*h/ml, and a half-life of 1.7 to 24.6 days, depending on the dose administered, and all doses were well tolerated (Table 1) [43]. Considering that the IC90 values for inhibition of L3 molting and L4 motility in vitro are 2 nM and 78 nM (Table 1 and S1 Fig), respectively, these values are already covered by the PK profile of the drug starting at 2.5 mg. Hence, the clinical trials of emodepside as a macrofilaricidal drug, if efficacious at 2.5 mg or above, could have additional implications in terms of its prophylactic potential.

We propose that all 5 drugs are effective against the early stages of O. volvulus based on their efficacy (IC50/IC90) in vitro.
However, based on their known pharmacokinetic profiles in humans, they can be prioritized for future evaluation for their utility for prophylactic activity in humans as follows: emodepside > moxidectin > albendazole > oxfendazole > ivermectin. Moreover, we believe that the addition of some of these putative “prophylactic” drugs individually or in combination with the current MDA regimens against onchocerciasis would also align well with the integrated goals of the Expanded Special Project for Elimination of Neglected Tropical Diseases and possibly also expedite the elimination goals of one of the other 6 neglected tropical diseases amenable to MDA: the STH [44]. All 5 of these drugs are broad-spectrum anthelmintic drugs that are effective against STH infections [4549], and thus may also benefit MDA programs aimed at controlling STH infections. The effects of MDA with ivermectin or albendazole on STHs (hookworms, Ascaris lumbricoides, and Trichuris trichiura) have already been explored in clinical studies [45,47,50] and were shown to have a significant impact on the STH infection rates in the treated communities. One dose of moxidectin (8 mg) in combination with albendazole (400 mg) was as effective as a combination of albendazole and oxantel pamoate (currently the most efficacious treatment against T. trichiura) in reducing fecal T. trichiura egg counts [46]. Notably, oxfendazole is also being tested for its effectiveness in humans against trichuriasis (NCT03435718). Additionally, emodepside was shown to not only have a strong inhibitory activity against adult STH worms in animal models with an ED50 of less than 1.5 mg/kg, but also against STH larval stages in vitro with IC50 <4 μM for L3s [49].We could envision that a single drug, a combination of any of these 5 drugs, or just those we have prioritized (moxidectin and emodepside), when administered also for prophylaxis against the development of new O. 
volvulus infection, would also protect against new STH infections. Broad-spectrum chemoprophylaxis of nematode infections in humans could also save on the costs and time invested toward elimination of co-endemic parasites through the administration of a combination of drugs. Moreover, considering the time-consuming process of drug discovery, the heavy costs incurred, and the high failure rates, the prospect of repurposing commercially available drugs used for other human or veterinary diseases for the prophylaxis of O. volvulus infection is an attractive one [31,51–54]. Repurposing of drugs could also accelerate the approval timeline for new drug indications, since information regarding mechanism, dosing, toxicity, and metabolism would be readily available.

In summary, our O. volvulus in vitro drug testing studies reinforce the “old” proposition of employing MDA drugs for prophylactic strategies as well, inhibiting the development of new infections with O. volvulus in the endemic regions under MDA. We report for the first time that, in vitro, emodepside, moxidectin, and ivermectin have very promising inhibitory effects on both L3s and L4s, with albendazole and oxfendazole warranting additional consideration. Importantly, considering that the L4 larvae are longer lived than the L3 stage, and hence the more feasible target against the establishment of new infections, we believe that targeting the L4 stage would be an invaluable tool toward advancing sustainable elimination goals for onchocerciasis. Moxidectin and emodepside, with their superior half-lives and pharmacokinetic profiles in humans and their efficacy in vitro against both L3 and L4 stages of the parasite, seem to show the most promise for this purpose. Of significance, the doses required to provide exposures that would cover the IC90s achieved by these 2 drugs in vitro against L3, and by emodepside against L4, have been shown to be well tolerated in humans (Table 1).
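The coverage argument above reduces to a unit conversion (molar IC values to mass concentration) plus a comparison against the trial Cmax. The following is a minimal sketch of that check, not part of the original analysis; the molecular weight used is an assumption inferred from the text's own paired values (2 nM ≈ 2.2 ng/ml implies roughly 1,100 g/mol for emodepside), so exact figures should be taken from the literature.

```python
# Sketch: does a reported Cmax cover an in vitro IC90?
# MW below is an assumption back-calculated from the conversions quoted
# in the text (2 nM = 2.2 ng/ml), not an authoritative value.

def nM_to_ng_per_ml(conc_nM: float, mw_g_per_mol: float) -> float:
    """Convert a molar concentration (nM) to a mass concentration (ng/ml)."""
    # 1 nM = 1e-9 mol/L; dividing by 1,000 converts nM * (g/mol) to ng/ml
    return conc_nM * mw_g_per_mol / 1000.0

def covers_ic90(cmax_ng_ml: float, ic90_nM: float, mw: float) -> bool:
    """True if the observed maximum plasma concentration meets the IC90."""
    return cmax_ng_ml >= nM_to_ng_per_ml(ic90_nM, mw)

MW_EMODEPSIDE = 1100.0  # g/mol, assumed (inferred from the text)

# IC90s quoted above: 2 nM (L3 molting), 78 nM (L4 motility)
print(nM_to_ng_per_ml(2.0, MW_EMODEPSIDE))    # 2.2 ng/ml
print(nM_to_ng_per_ml(78.0, MW_EMODEPSIDE))   # ~85.8 ng/ml
# Trial Cmax range was 18.6-595 ng/ml: upper doses cover both IC90s
print(covers_ic90(595.0, 78.0, MW_EMODEPSIDE))  # True
print(covers_ic90(18.6, 78.0, MW_EMODEPSIDE))   # False
```

The same two-line check applies to any of the 5 drugs once its molecular weight and Cmax are substituted in.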
Crucially, as these new drugs are rolled out for human use as microfilaricidal and/or macrofilaricidal drugs, it would be important to extend the clinical protocols to also monitor their effects on the development of new infections in populations exposed to active transmission, using serological assays that can predict new infections and distinguish them from earlier ones [55]. This could reveal valuable information to foster the development of more complementary elimination programs that target not only the microfilariae (moxidectin) and the adult worms (emodepside) but also the other infectious stages of the parasite, with their effects on STH being an added advantage.

Mathematical modeling has long influenced the design of intervention policies for onchocerciasis and predicted the potential outcomes of various regimens used by the elimination programs and the feasibility of elimination [56–60]. We believe that a revised mathematical model that also takes into account the additional aspect of targeting the L3 and L4 stages could help assess the enhanced impact this complementary tool might have in advancing the goal of elimination, and accordingly support a revised policy for operational intervention programs by the decision-making bodies, first for onchocerciasis and perhaps also as a pan-nematode control measure [7,61,62]. Given that human clinical trials in which infected people were treated quarterly with ivermectin showed a considerable trend toward fewer newly formed nodules, such a revised regimen might also support protection from new infections. Clinical trials to assess the efficacy of biannual versus annual doses of ivermectin or moxidectin against onchocerciasis have already been initiated (NCT03876262).
Alternatively, increasing the frequency of future treatments with moxidectin and/or emodepside to biannual or quarterly dosing, and/or using them in combination, could also improve their chemotherapeutic potential by targeting multiple stages of the parasite, thus increasing the control potential of these new MDA drugs and ultimately supporting not only a faster timeline but also sustained elimination.  相似文献

10.
This Formal Comment provides clarifications on the authors’ recent estimates of global bacterial diversity and the current status of the field, and responds to a Formal Comment from John Wiens regarding their prior work.

We welcome Wiens’ efforts to estimate global animal-associated bacterial richness and thank him for highlighting points of confusion and potential caveats in our previous work on the topic [1]. We find Wiens’ ideas worthy of consideration, as most of them represent a step in the right direction, and we encourage lively scientific discourse for the advancement of knowledge. Time will ultimately reveal which estimates, and underlying assumptions, came closest to the true bacterial richness; we are excited and confident that this will happen in the near future thanks to rapidly increasing sequencing capabilities. Here, we provide some clarifications on our work, its relation to Wiens’ estimates, and the current status of the field.

First, Wiens states that we excluded animal-associated bacterial species in our global estimates. However, thousands of animal-associated samples were included in our analysis, and this was clearly stated in our main text (second paragraph on page 3).

Second, Wiens’ commentary focuses on “S1 Text” of our paper [1], which was rather peripheral and, hence, in the Supporting information. S1 Text [1] critically evaluated the rationale underlying previous estimates of global bacterial operational taxonomic unit (OTU) richness by Larsen and colleagues [2], but the results of S1 Text [1] did not in any way flow into the analyses presented in our main article. Indeed, our estimates of global bacterial (and archaeal) richness, discussed in our main article, are based on 7 alternative well-established estimation methods founded on concrete statistical models, each developed specifically for richness estimates from multiple survey data.
We applied these methods to >34,000 samples from >490 studies, including, but not restricted to, animal microbiomes, to arrive at our global estimates, independently of the discussion in S1 Text [1].

Third, Wiens’ commentary can yield the impression that we proposed that there are only 40,100 animal-associated bacterial OTUs and that Cephalotes in particular have only 40 associated bacterial OTUs. However, these numbers, mentioned in our S1 Text [1], were not meant to be taken as proposed point estimates for animal-associated OTU richness, and we believe that this was clear from our text. Instead, these numbers were meant as examples to demonstrate how strongly the estimates of animal-associated bacterial richness by Larsen and colleagues [2] would decrease simply by (a) using better-justified mathematical formulas, i.e., with the same input data as used by Larsen and colleagues [2] but founded on an actual statistical model; (b) accounting for even minor overlaps in the OTUs associated with different animal genera; and/or (c) using alternative animal diversity estimates published by others [3], rather than those proposed by Larsen and colleagues [2]. Specifically, regarding (b), Larsen and colleagues [2] (pages 233 and 259) performed pairwise host-species comparisons within various insect genera (for example, within Cephalotes) to estimate on average how many bacterial OTUs were unique to each host species, then multiplied that estimate by their estimated number of animal species to determine the global animal-associated bacterial richness. However, since their pairwise host-species comparisons were restricted to congeneric species, their estimated number of unique OTUs per host species does not account for potential overlaps between different host genera. Indeed, even if an OTU is only found “in one” Cephalotes species, it might not be truly unique to that host species if it is also present in members of other host genera.
To clarify, we did not claim that all animal genera share bacterial OTUs, but instead considered the implications of some average microbiome overlap (some animal genera might share no bacteria, and other genera might share a lot). The average microbiome overlap of 0.1% (when clustering bacterial 16S sequences into OTUs at 97% similarity) between animal genera used in our illustrative example in S1 Text [1] is of course speculative, but it is not unreasonable (see our next point). A zero overlap (implicitly assumed by Larsen and colleagues [2]) is almost certainly wrong. One goal of our S1 Text [1] was to point out the dramatic effects of such overlaps on animal-associated bacterial richness estimates using “basic” mathematical arguments.

Fourth, Wiens’ commentary could yield the impression that existing data are able to tell us with sufficient certainty when a bacterial OTU is “unique” to a specific animal taxon. However, so far, the microbiomes of only a minuscule fraction of animal species have been surveyed. One can thus certainly not exclude the possibility that many bacterial OTUs currently thought to be “unique” to a certain animal taxon will eventually also be found in other (potentially distantly related) animal taxa, for example, due to similar host diets and/or environmental conditions [4–7]. As a case in point, many bacteria in herbivorous fish guts were found to be closely related to bacteria in mammals [8], and Song and colleagues [6] report that bat microbiomes closely resemble those of birds. The gut microbiome of caterpillars consists mostly of dietary and environmental bacteria and is not species specific [4]. Even in animal taxa with characteristic microbiota, there is a documented overlap across host species and genera. For example, there are a small number of bacteria consistently and specifically associated with bees, but these are found across bee genera at the level of 99.5%-similar 16S rRNA OTUs [5].
To further illustrate that an average microbiome overlap between animal taxa at least as large as the one considered in our S1 Text (0.1%) [1] is not unreasonable, we analyzed 16S rRNA sequences from the Earth Microbiome Project [6,9] and measured the overlap of microbiota originating from individuals of different animal taxa. We found that, on average, 2 individuals from different host classes (e.g., 1 mammalian and 1 avian sample) share 1.26% of their OTUs (16S clustered at 100% similarity), and 2 individuals from different host genera belonging to the same class (e.g., 2 mammalian samples) share 2.84% of their OTUs (methods in S1 Text of this response). A coarser OTU threshold (e.g., 97% similarity, considered in our original paper [1]) would further increase these average overlaps. While less is known about insect microbiomes, there is currently little reason to expect a drastically different picture there, and, as explained in our S1 Text [1], even a small average microbiome overlap of 0.1% between host genera would strongly limit total bacterial richness estimates. The fact that the accumulation curve of detected bacterial OTUs over sampled insect species does not yet strongly level off says little about where the accumulation curve would asymptotically converge; rigorous statistical methods, such as the ones used for our global estimates [1], would be needed to estimate this asymptote.

Lastly, we stress that while the present conversation (including previous estimates by Louca and colleagues [1], Larsen and colleagues [2], Locey and colleagues [10], Wiens’ commentary, and this response) focuses on 16S rRNA OTUs, it may well be that at finer phylogenetic resolutions, e.g., at bacterial strain level, host specificity and bacterial richness are substantially higher. In particular, future whole-genome sequencing surveys may well reveal the existence of far more genomic clusters and ecotypes than 16S-based OTUs.  相似文献
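The argument above is that richness must be extrapolated from survey data with a proper statistical estimator rather than read off an unsaturated accumulation curve. As one illustration of such an estimator (not one of the 7 methods the authors actually used, which are not named here), a minimal Chao1 lower-bound estimate can be computed from per-OTU abundances:

```python
from collections import Counter

def chao1(otu_counts):
    """Chao1 lower-bound richness estimate from per-OTU abundances.

    S_chao1 = S_obs + f1^2 / (2 * f2), where f1 and f2 are the numbers
    of singleton and doubleton OTUs; the bias-corrected form
    S_obs + f1*(f1-1)/2 is used when no doubletons are observed.
    """
    counts = [c for c in otu_counts if c > 0]
    s_obs = len(counts)                      # observed richness
    freq = Counter(counts)
    f1, f2 = freq.get(1, 0), freq.get(2, 0)  # singletons, doubletons
    if f2 > 0:
        return s_obs + f1 * f1 / (2.0 * f2)
    return s_obs + f1 * (f1 - 1) / 2.0

# Toy abundance vector: 6 observed OTUs, 3 singletons, 1 doubleton
print(chao1([1, 1, 1, 2, 5, 10]))  # 6 + 9/2 = 10.5
```

Many rare (singleton) OTUs relative to doubletons push the estimate well above the observed count, which is exactly the situation in undersampled insect microbiomes.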

11.
12.
Coral reefs on remote islands and atolls are less exposed to direct human stressors but are becoming increasingly vulnerable because of their development for geopolitical and military purposes. Here we document dredging and filling activities by countries in the South China Sea, where building new islands and channels on atolls is leading to considerable losses of, and perhaps irreversible damage to, unique coral reef ecosystems. Preventing similar damage across other reefs in the region necessitates the urgent development of cooperative management of disputed territories in the South China Sea. We suggest using the Antarctic Treaty as a positive precedent for such international cooperation.

Coral reefs constitute one of the most diverse, socioeconomically important, and threatened ecosystems in the world [1–3]. Coral reefs harbor thousands of species [4] and provide food and livelihoods for millions of people while safeguarding coastal populations from extreme weather disturbances [2,3]. Unfortunately, the world’s coral reefs are rapidly degrading [1–3], with ~19% of the total coral reef area effectively lost [3] and 60% to 75% under direct human pressures [3,5,6]. Climate change aside, this decline has been attributed to threats emerging from widespread human expansion in coastal areas, which has facilitated exploitation of local resources, assisted colonization by invasive species, and led to the loss and degradation of habitats directly and indirectly through fishing and runoff from agriculture and sewage systems [1–3,5–7]. In efforts to protect the world’s coral reefs, remote islands and atolls are often seen as reefs of “hope,” as their isolation and uninhabitability provide de facto protection against direct human stressors, and may help impacted reefs through replenishment [5,6].
Such isolated reefs may, however, still be vulnerable because of their geopolitical and military importance (e.g., allowing expansion of exclusive economic zones and providing strategic bases for military operations). Here we document patterns of reclamation (here defined as creating new land by filling submerged areas) of atolls in the South China Sea, which have resulted in considerable loss of coral reefs. We show that conditions are ripe for reclamation of more atolls, highlighting the need for international cooperation in the protection of these atolls before more unique and ecologically important biological assets are damaged, potentially irreversibly so.

Studies of past reclamations and reef dredging activities have shown that these operations are highly deleterious to coral reefs [8,9]. First, reef dredging affects large parts of the surrounding reef, not just the dredged areas themselves. For example, 440 ha of reef was completely destroyed by dredging on Johnston Island (United States) in the 1960s, but over 2,800 ha of nearby reefs were also affected [10]. Similarly, at Hay Point (Australia) in 2006, loss of coral cover was recorded up to 6 km away from dredging operations [11]. Second, recovery from the direct and indirect effects of dredging is slow at best and nonexistent at worst. In 1939, 29% of the reefs in Kaneohe Bay (United States) were removed by dredging, and none of the patch reefs that were dredged had completely recovered 30 years later [12]. In Castle Harbour (Bermuda), reclamation to build an airfield in the early 1940s led to limited coral recolonization and large quantities of resuspended sediments even 32 years after reclamation [13]; several fish species are claimed extinct as a result of this dredging [14,15]. Such examples and others led Hatcher et al.
[8] to conclude that dredging and land clearing, as well as the associated sedimentation, are possibly the most permanent of anthropogenic impacts on coral reefs.

The impacts of dredging for the Spratly Islands are of particular concern because the geographical position of these atolls favors connectivity via stepping stones for reefs over the region [16–19] and because their high biodiversity works as insurance for many species. In an extensive review of the sparse and limited data available for the region, Hughes et al. [20] showed that reefs on offshore atolls in the South China Sea were overall in better condition than near-shore reefs. For instance, by 2004 they reported average coral cover of 64% for the Spratly Islands and 68% for the Paracel Islands. By comparison, coral reefs across the Indo-Pacific region in 2004 had average coral cover below 25% [21]. Reefs on isolated atolls can still be prone to extensive bleaching and mortality due to global climate change [22] and, in the particular case of atolls in the South China Sea, the use of explosives and cyanide [20]. However, the potential for recovery of isolated reefs from such stressors is remarkable. Hughes et al. [20] documented, for instance, how coral cover in several offshore reefs in the region declined from above 80% in the early 1990s to below 6% by 1998 to 2001 (due to a mixture of El Niño and damaging fishing methods that use cyanide and explosives) but then recovered to 30% on most reefs, and up to 78% on some reefs, by 2004–2008. Another important attribute of atolls in the South China Sea is their great diversity of species. Over 6,500 marine species are recorded for these atolls [23], including some 571 reef coral species [24] (more than half of the world’s known species of reef-building corals).
The relatively better health and high diversity of coral reefs in atolls of the South China Sea highlight the uniqueness of such reefs and the important roles they may play for reefs throughout the entire region. Furthermore, these atolls are a safe harbor for some of the last viable populations of highly threatened species (e.g., the Bumphead Parrotfish [Bolbometopon muricatum] and several species of sawfishes [Pristis, Anoxypristis]), highlighting how dredging in the South China Sea may not only threaten species with extinction but also undermine the commitment by countries in the region to biodiversity conservation goals such as the Convention on Biological Diversity Aichi Targets and the United Nations Sustainable Development Goals.

Recently available remote sensing data (i.e., Landsat 8 Operational Land Imager and Thermal Infrared Sensors Terrain Corrected images) allow quantification of the sharp contrast between the gain of land and the loss of coral reefs resulting from reclamation in the Spratly Islands (Fig 1). For seven atolls recently reclaimed by China in the Spratly Islands (names provided in Fig 1D; see S1 Data for details), the area of reclamation was measured as the visible area in Landsat band 6, as prior to reclamation most of the atolls were submerged, with the exception of small areas occupied by a handful of buildings on piers (note that the amount of land area was near zero at the start of the reclamation; Fig 1C, S1 Data). The seven reclaimed atolls have effectively lost ~11.6 km2 (26.9%) of their reef area for a gain of ~10.7 km2 of land (i.e., a >75-fold increase in land area) from February 2014 to May 2015 (Fig 1C). The area of land gained was smaller than the area of reef lost because reefs were lost not only through land reclamation but also through the deepening of reef lagoons to allow boat access (Fig 1B).
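The quoted figures are internally consistent, which can be verified with a back-of-envelope calculation using only the numbers stated above (the implied totals are derived here for illustration and do not appear in the text):

```python
# Back-of-envelope check of the reclamation figures quoted above
# (input values taken directly from the text; derived totals are implied).
reef_lost_km2 = 11.6          # reef area lost across the 7 atolls
reef_lost_fraction = 0.269    # stated as 26.9% of their reef area
land_gained_km2 = 10.7        # new land created, Feb 2014 to May 2015

# Implied total reef area of the 7 atolls before reclamation
total_reef_km2 = reef_lost_km2 / reef_lost_fraction
print(round(total_reef_km2, 1))  # ~43.1 km2

# A >75-fold increase in land implies a near-zero initial land area,
# consistent with "a handful of buildings on piers"
initial_land_km2 = land_gained_km2 / 75
print(round(initial_land_km2, 2))  # ~0.14 km2

# Reef loss exceeds land gain because lagoons were also deepened
print(round(reef_lost_km2 - land_gained_km2, 1))  # 0.9 km2 extra loss
```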
Similar quantification of reclamation by other countries in the South China Sea is summarized in Fig 1 and Table 1.

Fig 1. Reclamation leads to gains of land in return for losses of coral reefs: a case example of China’s recent reclamation in the Spratly Islands.

Table 1. List of reclaimed atolls in the Spratly Islands and the Paracel Islands.

The impacts of reclamation on coral reefs are likely more severe than simple changes in area, as reclamation is being achieved by means of suction dredging (i.e., cutting and sucking materials from the seafloor and pumping them over land). With this method, reefs are ecologically degraded and denuded of their structural complexity. Dredging and pumping also disturb the seafloor and can cause runoff from reclaimed land, which generates large clouds of suspended sediment [11] that can lead to coral mortality by overwhelming the corals’ capacity to remove sediments and leaving corals susceptible to lesions and diseases [7,9,25]. The highly abrasive coralline sands in flowing water can scour away living tissue on a myriad of species and bury many organisms beyond their recovery limits [26]. Such sedimentation also prevents new coral larvae from settling in and around the dredged areas, which is one of the main reasons why dredged areas show no signs of recovery even decades after the initial dredging operations [9,12,13]. Furthermore, degradation of wave-breaking reef crests, which make reclamation in these areas feasible, will further reduce coral reefs’ ability to (1) self-repair and protect against wave abrasion [27,28] (especially in a region characterized by typhoons) and (2) keep up with rising sea levels over the next several decades [29].
This suggests that the new islands will require periodic dredging and filling, that these reefs may face chronic distress and long-term ecological damage, and that reclamation may prove economically expensive and impractical.

The potential for land reclamation on other atolls in the Spratly Islands is high, which necessitates the urgent development of cooperative management of disputed territories in the South China Sea. First, the Spratly Islands are rich in atolls with characteristics similar to those already reclaimed (Fig 1D); second, there are calls for rapid development of disputed territories to gain access to resources and increase sovereignty and military strength [30]; and third, all countries with claims in the Spratly Islands have performed reclamation in this archipelago [20]. One possibility for such cooperative management is the creation of a multinational marine protected area [16,17]. Such a marine protected area could safeguard an area of high biodiversity and importance to genetic connectivity in the Pacific, in addition to promoting peace in the region (extended justification provided by McManus [16,17]). A positive precedent for the creation of this protected area is that of Antarctica, which was also subject to numerous overlapping claims and where a recently renewed treaty froze national claims, preventing large-scale ecological damage while providing environmental protection and areas for scientific study. Development of such a legal framework for the management of the Spratly Islands could prevent conflict, promote functional ecosystems, and potentially result in larger gains (through spillover, e.g., [31]) for all countries involved.  相似文献

13.
14.
In the last 15 years, antiretroviral therapy (ART) has been the most globally impactful life-saving development of medical research. Antiretrovirals (ARVs) are used with great success for both the treatment and prevention of HIV infection. Despite these remarkable advances, this epidemic grows relentlessly worldwide. Over 2.1 million new infections occur each year, two-thirds in women and 240,000 in children. The widespread elimination of HIV will require the development of new, more potent prevention tools. Such efforts are imperative on a global scale. However, it must also be recognised that true containment of the epidemic requires the development and widespread implementation of a scientific advancement that has eluded us to date—a highly effective vaccine. Striving for such medical advances is what is required to achieve the end of AIDS.

In the last 15 years, antiretroviral therapy (ART) has been the most globally impactful life-saving development of medical research. Antiretrovirals (ARVs) are used with great success for both the treatment and prevention of HIV infection. In the United States, the widespread implementation of combination ARVs led to the virtual eradication of mother-to-child transmission of HIV from 1,650 cases in 1991 to 110 cases in 2011, and a turnaround in AIDS deaths from an almost 100% five-year mortality rate to a five-year survival rate of 91% in HIV-infected adults [1]. Currently, the estimated average lifespan of an HIV-infected adult in the developed world is well over 40 years post-diagnosis. Survival rates in the developing world, although lower, are improving: in sub-Saharan Africa, AIDS deaths fell by 39% between 2005 and 2013, and the biggest decline, 51%, was seen in South Africa [2].

Furthermore, the association between ART, viremia, and transmission has led to the concept of “test and treat,” with the hope of reducing community viral load by testing early and initiating treatment as soon as a diagnosis of HIV is made [3].
Indeed, selected regions of the world have begun to actualize the public health value of ARVs, from gains in life expectancy to impact on onward transmission, with a potential 1% decline in new infections for every 10% increase in treatment coverage [2]. In September 2015, WHO released new guidelines removing all limitations on eligibility for ART among people living with HIV and recommending pre-exposure prophylaxis (PrEP) for population groups at significant HIV risk, paving the way for a global onslaught on HIV [4].

Despite these remarkable advances, this epidemic grows relentlessly worldwide. Over 2.1 million new infections occur each year, two-thirds in women and 240,000 in children [2]. In heavily affected countries, HIV infection rates have at best stabilized: the annualized acquisition rates in persons in their first decade of sexual activity average 3%–5% yearly in southern Africa [5–7]. These figures are hardly compatible with the international health community’s stated goal of an “AIDS-free generation” [8,9]. In highly resourced settings, microepidemics of HIV still occur, particularly among gays, bisexuals, and men who have sex with men (MSM) [10]. HIV epidemics are expanding in two geographic regions in 2015—the Middle East/North Africa and Eastern Europe/Central Asia—largely due to challenges in implementing evidence-based HIV policies and programmes [2]. Even in the US over the past decade, the almost 50,000 new cases recorded annually, two-thirds among MSM, have remained stable for years and show no evidence of declining [1].

While treatment scale-up, medical male circumcision [11], and the implementation of strategies to prevent mother-to-child transmission [12] have received global traction, systemic or topical ARV-based biomedical advances to prevent sexual acquisition of HIV have, as yet, made limited impressions on a population basis, despite their reported efficacy.
Factors such as their adherence requirements, cost, potential for drug resistance, and long-term feasibility have restricted the appetite for implementation, even though these approaches may reduce HIV incidence in select populations.

Already, several trials have shown that daily oral administration of the ARV tenofovir disoproxil fumarate (TDF), taken singly or in combination with emtricitabine, as PrEP by HIV-uninfected individuals reduces HIV acquisition among serodiscordant couples (where one partner is HIV-positive and the other is HIV-negative) [13], MSM [14], at-risk men and women [15], and people who inject drugs [16,17] by between 44% and 75%. Long-acting injectable antiretroviral agents such as rilpivirine and cabotegravir, administered every two and three months, respectively, are also being developed for PrEP. All of these PrEP approaches depend on repeated HIV testing and adherence to drug regimens, which may limit their effectiveness in some populations and contexts.

The widespread elimination of HIV will require the development of new, more potent prevention tools. Because HIV acquisition occurs subclinically, the elimination of HIV on a population basis will require a highly effective vaccine. Alternatively, if vaccine development is delayed, supplementary strategies may include long-acting pre-exposure antiretroviral cocktails and/or the administration of neutralizing antibodies through long-lasting parenteral preparations or the development of a “genetic immunization” delivery system, as well as scaling up delivery of highly effective regimens to eliminate mother-to-child HIV transmission (Fig 1).

Fig 1. Medical interventions required to end the epidemic of HIV. Image credit: Glenda Gray.  相似文献

15.
The hippocampus has unique access to neuronal activity across all of the neocortex. Yet an unanswered question is how the transfer of information between these structures is gated. One hypothesis involves temporal locking of activity in the neocortex with that in the hippocampus. New data from the Matthew E. Diamond laboratory show that the rhythmic neuronal activity that accompanies vibrissa-based sensation in rats transiently locks to ongoing hippocampal θ-rhythmic activity during the sensory-gathering epoch of a discrimination task. This result complements past studies on the locking of sniffing and the θ-rhythm, as well as on the relation of sniffing and whisking. An overarching possibility is that the preBötzinger inspiration oscillator, which paces whisking, can selectively lock with the θ-rhythm to traffic sensorimotor information between the rat’s neocortex and hippocampus.

The hippocampus lies along the margins of the cortical mantle and has unique access to neuronal activity across all of the neocortex. From a functional perspective, the hippocampus forms the apex of neuronal processing in mammals and is a key element in short-term working memory (in which neuronal signals persist for tens of seconds) that is independent of the frontal cortex (reviewed in [1,2]). Sensory information from multiple modalities is highly transformed as it passes from primary and higher-order sensory areas to the hippocampus. Several anatomically defined regions that lie within the temporal lobe take part in this transformation, all of which involve circuits with extensive recurrent feedback connections (reviewed in [3]) (Fig 1). This circuit motif is reminiscent of the pattern of connectivity within models of associative neuronal networks, whose dynamics lead to the clustering of neuronal inputs to form a reduced set of abstract representations [4] (reviewed in [5]).
The first way station in the temporal lobe contains the postrhinal and perirhinal cortices, followed by the medial and lateral entorhinal cortices. Of note, olfactory input—which, unlike other senses, has no spatial component to its representation—has direct input to the lateral entorhinal cortex [6]. The third structure is the hippocampus, which contains multiple substructures (Fig 1).

Fig 1. Schematic view of the circuitry of the temporal lobe and its connections to other brain areas of relevance. Figure abstracted from published results [7–15]. Composite illustration by Julia Kuhl.

The specific nature of signal transformation and neuronal computations within the hippocampus is largely an open issue that defines the agenda of a great many laboratories. Equally vexing is the nature of signal transformation as the output leaves the hippocampus and propagates back to regions in the neocortex (Fig 1)—including the medial prefrontal cortex, a site of sensory integration and decision-making—in order to influence perception and motor action. The current experimental data suggest that only some signals within the sensory stream propagate into and out of the hippocampus. What regulates communication with the hippocampus or, more generally, with structures within the temporal lobe? The results from studies in rats and mice suggest that the most parsimonious hypothesis, at least for rodents, involves the rhythmic nature of neuronal activity at the so-called θ-rhythm [16], a 5–10 Hz oscillation (reviewed in [17]). The origin of the rhythm is not readily localized to a single locus [10], but it certainly involves input from the medial septum [17] (a member of the forebrain cholinergic system) as well as from the supramammillary nucleus [10,18] (a member of the hypothalamus). The medial septum projects broadly to targets in the hippocampus and entorhinal cortex (Fig 1) [10].
Many motor actions, such as the orofacial actions of sniffing, whisking, and licking, occur within the frequency range of the θ-rhythm [19,20]. Thus, sensory input that is modulated by rhythmic self-motion can, in principle, phase-lock with hippocampal activity at the θ-rhythm to ensure the coherent trafficking of information between the relevant neocortical regions and temporal lobe structures [21–23].

We now shift to the nature of orofacial sensory inputs, specifically whisking and sniffing, which are believed to dominate the world view of rodents [19]. Recent work identified a premotor nucleus in the ventral medulla, named the vibrissa region of the intermediate reticular zone, whose oscillatory output is necessary and sufficient to drive rhythmic whisking [24]. While whisking can occur independently of breathing, sniffing and whisking are synchronized in the curious and aroused animal [24,25], as the preBötzinger complex in the medulla [26]—the oscillator for inspiration—paces whisking at nominally 5–10 Hz through collateral projections [27]. Thus, for the purposes of reviewing evidence for the locking of orofacial sensory inputs to the hippocampal θ-rhythm, we confine our analysis to aroused animals that function with effectively a single sniff/whisk oscillator [28].

What is the evidence for the locking of somatosensory signaling by the vibrissae to the hippocampal θ-rhythm? The first suggestion of phase locking between whisking and the θ-rhythm was based on a small sample size [29,30], which allowed for the possibility of spurious correlations. Phase locking was subsequently reexamined, using a relatively large dataset of 2 s whisking epochs across many animals, as animals whisked in air [31]. The authors concluded that while whisking and the θ-rhythm share the same spectral band, their phases drift incoherently.
Yet the possibility remained that phase locking could occur during special intervals, such as when a rat learns to discriminate an object with its vibrissae or when it performs a memory-based task. This set the stage for a further reexamination of this issue across different epochs in a rewarded task. Work from Diamond’s laboratory that is published in the current issue of PLOS Biology addresses just this point in a well-crafted experiment that involves rats trained to perform a discrimination task.

Grion, Akrami, Zuo, Stella, and Diamond [32] trained rats to discriminate between two different textures with their vibrissae. The animals were rewarded if they turned to a water port on the side that was paired with a particular texture. Concurrent with this task, the investigators also recorded the local field potential in the hippocampus (from which they extracted the θ-rhythm), the position of the vibrissae (from which they extracted the evolution of phase in the whisk cycle), and the spiking of units in the vibrissa primary sensory cortex. Their first new finding is a substantial increase in the amplitude of the hippocampal field potential at the θ-rhythm frequency—approximately 10 Hz for the data of Fig 2A—during the two approximately 0.5 s epochs when the animal approaches the texture and whisks against it. There is significant phase locking between whisking and the hippocampal θ-rhythm during both of these epochs (Fig 2B), as compared to a null hypothesis based on epochs in which the animal whisked in air outside the discrimination zone. Unfortunately, the coherence between whisking and the hippocampal θ-rhythm could not be ascertained during the decision (turn) and reward epochs.
Nonetheless, these data show that the coherence between whisking and the hippocampal θ-rhythm is closely aligned to epochs of active information gathering.

Fig 2. Summary of findings on the θ-rhythm in a rat during a texture discrimination task, derived from reference [32]. (A) Spectrogram showing the change in spectral power of the local field potential in the hippocampal area CA1 before, during, and after a whisking-based discrimination task. (B) Summary index of the increase in coherence between the band-limited hippocampal θ-rhythm and whisking signals during approach of the rat to the stimulus and subsequent touch. The index reports √(⟨sin(ϕH − ϕW)⟩² + ⟨cos(ϕH − ϕW)⟩²), where ϕH and ϕW are the instantaneous phases of the hippocampal and whisking signals, respectively, and averaging is over all trials and animals. (C) Summary indices of the increase in coherence between the band-limited hippocampal θ-rhythm and the spiking signal in the vibrissa primary sensory cortex (“barrel cortex”). The magnitude of the index for each neuron is plotted versus phase in the θ-rhythm. The arrows show the concentration of units around the mean phase—black arrows for the vector average across only neurons with significant phase locking (solid circles) and gray arrows for the vector average across all neurons (open and closed circles). The concurrent positions of the vibrissae are indicated. The vector average is statistically significant only for the approach (p < 0.0001) and touch (p = 0.04) epochs.

The second finding by Grion, Akrami, Zuo, Stella, and Diamond [32] addresses the relationship between spiking activity in the vibrissa primary sensory cortex and the hippocampal θ-rhythm. The authors find that spiking is essentially independent of the θ-rhythm outside of the task (foraging in Fig 2C), similar to the result for whisking and the θ-rhythm (Fig 2B).
They observe strong coherence between spiking and the θ-rhythm during the 0.5 s epoch when the animal approaches the textures (approach in Fig 2C), yet reduced (but still significant) coherence during the touch epoch (touch in Fig 2C). The latter result is somewhat surprising, given past work from a number of laboratories that observe spiking in the primary sensory cortex and whisking to be weakly yet significantly phase-locked during exploratory whisking [33–37]. Perhaps overtraining leads to only a modest need for the transfer of sensory information to the hippocampus. Nonetheless, these data establish that phase locking of hippocampal and sensory cortical activity is essentially confined to the epoch of sensory gathering.

Given the recent finding of a one-to-one locking of whisking and sniffing [24], we expect to find direct evidence for the phase locking of sniffing and the θ-rhythm. Early work indeed reported such phase locking [38] but, as in the case of whisking [29], this may have been a consequence of too small a sample and, thus, inadequate statistical power. However, Macrides, Eichenbaum, and Forbes [39] reexamined the relationship between sniffing and the hippocampal θ-rhythm before, during, and after animals sampled an odorant in a forced-choice task. They found evidence that the two rhythms phase-lock within approximately one second of the sampling epoch. We interpret this locking to be similar to that seen in the study by Diamond and colleagues (Fig 2B) [32].
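The coherence index used in these studies (Fig 2B) is the mean resultant vector length of the phase difference between the two signals. A minimal sketch of the computation, using synthetic phases rather than the study’s data:

```python
import math
import random

def phase_locking_index(phi_h, phi_w):
    """Mean resultant vector length of the phase difference phi_h - phi_w.

    Returns a value near 1 when the two signals hold a fixed phase
    relationship and near 0 when their phases drift independently.
    """
    d = [h - w for h, w in zip(phi_h, phi_w)]
    s = sum(math.sin(x) for x in d) / len(d)
    c = sum(math.cos(x) for x in d) / len(d)
    return math.hypot(s, c)

random.seed(0)
n = 10_000
# Locked case: whisking lags theta by a constant offset plus small jitter.
theta = [random.uniform(0, 2 * math.pi) for _ in range(n)]
whisk_locked = [t - 0.8 + random.gauss(0, 0.3) for t in theta]
# Drifting case: the two phases are unrelated.
whisk_drift = [random.uniform(0, 2 * math.pi) for _ in range(n)]

print(phase_locking_index(theta, whisk_locked))  # near 1
print(phase_locking_index(theta, whisk_drift))   # near 0
```

Note that the index is insensitive to the value of the constant offset itself, which is why, as discussed below, signals that are out of phase by π radians can register as locked just as readily as in-phase signals.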
All told, the combined data for sniffing and whisking by the aroused rodent, as accumulated across multiple laboratories, suggest that two oscillatory circuits—the supramammillary nucleus and medial septum complex that drives the hippocampal θ-rhythm and the preBötzinger complex that drives inspiration and paces the whisking oscillator during sniffing (Fig 1)—can phase-lock during epochs of gathering sensory information and likely sustain working memory.

What anatomical pathway can lead to phase locking of these two oscillators? The electrophysiological study of Tsanov, Chah, Reilly, and O’Mara [9] supports a pathway from the medial septum, which is driven by the supramammillary nucleus, to dorsal pontine nuclei in the brainstem. The pontine nucleus projects to respiratory nuclei and, ultimately, the preBötzinger oscillator (Fig 1). This unidirectional pathway can, in principle, entrain breathing and whisking. Phase locking is not expected to occur during periods of basal breathing, when the breathing rate and θ-rhythm occur at highly incommensurate frequencies. However, it remains unclear why phase locking occurs only during a selected epoch of a discrimination task, whereas breathing and the θ-rhythm occupy the same frequency band during the epochs of approach as well as touch-based target selection (Fig 2A). While a reafferent pathway provides the rat with information on self-motion of the vibrissae (Fig 1), it is currently unknown whether that information provides feedback for phase locking.

A seeming requirement for effective communication between neocortical and hippocampal processing is that phase locking must be achieved at all possible phases of the θ-rhythm. Can multiple phase differences between sensory signals and the hippocampal θ-rhythm be accommodated? Two studies report that the θ-rhythm undergoes a systematic phase shift along the dorsal–ventral axis of the hippocampus [40,41], although the full extent of this shift is only π radians [41].
In addition, past work shows that vibrissa input during whisking is represented among all phases of the sniff/whisk cycle, at levels from primary sensory neurons [42,43] through thalamus [44,45] and neocortex [33–37], with a bias toward retraction from the protracted position. A similar spread in phase occurs for olfactory input, as observed at the levels of the olfactory bulb [46] and cortex [47]. Thus, in principle, the hippocampus can receive, transform, and output sensory signals that arise over all possible phases in the sniff/whisk cycle. In this regard, two signals that are exactly out of phase by π radians can phase-lock as readily as signals that are in phase.

What are the constraints for phase locking to occur within the observed texture identification epochs? For a linear system, the time to lock between an external input and the hippocampal θ-rhythm depends on the observed spread in the spectrum of the θ-rhythm. This spread is estimated as Δf ~ 3 Hz (half-width at half-maximum amplitude), implying a locking time on the order of 1/Δf ~ 0.3 s. This is consistent with the approximately one second of enhanced θ-rhythm activity observed in the study by Diamond and colleagues (Fig 2A) [32] and in prior work [39,48] during forced-choice tasks with rodents.

Does the θ-rhythm also play a role in the gating of output from the hippocampus to areas of the neocortex? Siapas, Lubenov, and Wilson [48] provided evidence that the hippocampal θ-rhythm phase-locks to electrical activity in the medial prefrontal cortex, a site of sensory integration as well as decision-making. Subsequent work [49–51] showed that the hippocampus drives the prefrontal cortex, consistent with the known unidirectional connectivity between Cornu Ammonis area 1 (CA1) of the hippocampus and the prefrontal cortex [11] (Fig 1). Further, phase locking of hippocampal and prefrontal cortical activity is largely confined to the epoch of decision-making, as opposed to the epoch of sensory gathering.
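The linewidth argument above amounts to a one-line estimate; a sketch, with the values taken from the text:

```python
# Order-of-magnitude estimate of the locking time between an external
# rhythmic input and the hippocampal theta-rhythm: a spectral spread of
# delta_f implies that phases can align within a time of order 1/delta_f.
delta_f = 3.0            # Hz, half-width at half-maximum of the theta peak
t_lock = 1.0 / delta_f   # s, locking time, roughly 0.3 s

# The ~1 s window of enhanced theta activity therefore comfortably
# contains the time needed for locking to be established.
theta_window = 1.0       # s, approximate duration of enhanced theta
print(t_lock, t_lock < theta_window)
```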
Thus, over the course of approximately one second, sensory information flows into and then out of the hippocampus, gated by phase coherence between rhythmic neocortical and hippocampal neuronal activity.

It is of interest that the medial prefrontal cortex receives input signals from sensory areas in the neocortex [52] as well as a transformed version of these input signals via the hippocampus (Fig 1). Yet it remains to be determined if this constitutes a viable hub for the comparison of the original and transformed signals. In particular, projections to the medial prefrontal cortex arise from the ventral hippocampus [2], while studies on the phase locking of the hippocampal θ-rhythm to prefrontal neocortical activity were conducted in the dorsal hippocampus, where the θ-rhythm is strong compared to the ventral end [53]. Therefore, similar recordings need to be performed in the ventral hippocampus. An intriguing possibility is that the continuous phase shift of the θ-rhythm along the dorsal-to-ventral axis of the hippocampus [40,41] provides a means to encode the arrival of novel inputs from multiple sensory modalities relative to a common clock.

A final issue concerns the locking between sensory signals and hippocampal neuronal activity in species that do not exhibit a continuous θ-rhythm, with particular reference to bats [54–56] and primates [57–60]. One possibility is that only the up and down swings of neuronal activity about a mean are important, as opposed to the rhythm per se. In fact, for animals in which orofacial input plays a relatively minor role compared to rodents, such a scheme of clocked yet arrhythmic input may be a necessity. In this case, the window of processing is set by a stochastic interval between transitions, as opposed to the periodicity of the θ-rhythm.
This suggests that up/down swings of neuronal activity may drive hippocampal–neocortical communications in all species, with communication mediated via phase-locked oscillators in rodents and via synchronous fluctuations in bats and primates. The validity of this scheme and its potential consequences for neuronal computation remain an open issue and a focus of ongoing research.  相似文献   

16.
Multicellular eukaryotes can perform functions that exceed the possibilities of an individual cell. These functions emerge through interactions between differentiated cells that are precisely arranged in space. Bacteria also form multicellular collectives that consist of differentiated but genetically identical cells. How does the functionality of these collectives depend on the spatial arrangement of the differentiated bacteria? In a previous issue of PLOS Biology, van Gestel and colleagues reported an elegant example of how the spatial arrangement of differentiated cells gives rise to collective behavior in Bacillus subtilis colonies, further demonstrating the similarity of bacterial collectives to higher multicellular organisms.

Introductory textbooks tend to depict bacteria as rather primitive and simple life forms: the billions of cells in a population are all supposedly performing the exact same processes, independent of each other. According to this perspective, the properties of the population are thus nothing more than the sum of the properties of the individual cells. A brief look at the recent literature shows that life at the micro scale is much more complex and far more interesting. Even though cells in a population share the same genetic material and are exposed to similar environmental signals, they are individuals: they can greatly differ from each other in their properties and behaviors [1,2].

One source of such phenotypic variation is that individual cells experience different microenvironments and regulate their genes in response. However, and intriguingly, phenotypic differences can also arise in the absence of environmental variation [3]. The stochastic nature of biochemical reactions makes variation between individuals unavoidable: reaction rates in cells will fluctuate because of the typically small number of the molecules involved, leading to slight differences in the molecular composition between individual cells [4].
While cells cannot prevent fluctuations from occurring, the effect of these extracellular and intracellular perturbations on a cell’s phenotype can be controlled by changing the biochemical properties of molecules or the architecture of gene regulatory networks [4,5]. The degree of phenotypic variation could thus evolve in response to natural selection. This raises the question of whether the high degree of phenotypic variation observed in some traits could offer benefits to the bacteria [5].

One potential benefit of phenotypic variation is bet hedging. Bet hedging refers to a situation in which a fraction of the cells express alternative programs, which typically reduce growth in the current conditions but at the same time allow for increased growth or survival when the environment abruptly changes [6–8]. Another potential benefit can arise through the division of labor: phenotypic variation can lead to the formation of interacting subpopulations that specialize in complementary tasks [9]. As a result, the population as a whole can perform existing functions more efficiently or attain new functionality [10]. Division of labor enables groups of bacteria to engage in two tasks that are incompatible with each other but that are both required to attain a certain biological function.

One of the most famous examples of division of labor in bacteria is the specialization of multicellular cyanobacteria into photosynthesizing and nitrogen-fixing subpopulations [11]. Here, the driving force behind the division of labor is the biochemical incompatibility between photosynthesis and nitrogen fixation, as the oxygen produced during photosynthesis permanently damages the enzymes involved in nitrogen fixation [12]. Other examples include the division of labor between two subpopulations of Salmonella Typhimurium (Tm) during infections [9] and the formation of multicellular fruiting bodies in Myxococcus xanthus [13].
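The bet-hedging logic can be made concrete with a toy calculation of long-run (geometric mean) growth. The parameter values below are hypothetical, chosen only to make the trade-off visible, and are not taken from any of the cited studies:

```python
import math

def long_run_growth(p_dormant, p_stress=0.1, grow=3.0, survive_dormant=1.0):
    """Log long-run (geometric mean) growth rate of a population in which
    a fraction p_dormant of cells enters a non-growing, stress-proof state.

    Growing cells multiply by `grow` under normal conditions but die during
    a stress episode (probability p_stress per generation); dormant cells
    neither grow nor die. All values are illustrative.
    """
    good = grow * (1 - p_dormant) + survive_dormant * p_dormant
    bad = survive_dormant * p_dormant  # only dormant cells survive stress
    if bad == 0:
        return float("-inf")  # a single stress episode is lethal
    return (1 - p_stress) * math.log(good) + p_stress * math.log(bad)

print(long_run_growth(0.0))  # -inf: no hedging, extinction at first stress
print(long_run_growth(0.1))  # positive: hedging costs growth now, pays later
```

The hedging strategy grows more slowly than a pure-growth strategy in good times (log rate below log 3), yet it is the only one with a positive long-run growth rate once rare stress episodes are taken into account.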
Division of labor is not restricted to interactions between only two subpopulations; for example, the soil-dwelling bacterium Bacillus subtilis can differentiate into at least five different cell types [14]. Multiple types can simultaneously be present in Bacillus biofilms and colonies, each contributing different essential tasks [14,15].

An important question is whether a successful division of labor requires the different subpopulations to coordinate their behavior and spatial arrangement. For some systems, it turns out that spatial coordination is not required. For example, the division of labor in clonal groups of Salmonella Tm does not require that the two cell types are spatially arranged in a particular way [9]. In other systems, spatial coordination between the different cell types seems to be beneficial. For example, differentiation into nitrogen-fixing and photosynthetic cells in multicellular cyanobacteria is spatially regulated in a way that facilitates sharing of nitrogen and carbon [16]. In general, when cell differentiation is combined with coordination of behavior between cells, this can allow for the development of complex, group-level behaviors that cannot easily be deduced from the behavior of individual cells [17–20]. In these cases, a population can no longer be treated as an assembly of independent individuals but must be seen as a union that together shows collective behavior.

The study by van Gestel et al. [21] in a previous issue of PLOS Biology offers an exciting perspective on how collective behavior can arise from processes operating at the level of single cells. Van Gestel and colleagues [21] analyzed how groups of B. subtilis cells migrate across solid surfaces in a process known as sliding motility. The authors found that migration requires both individuality—the expression of different phenotypes in clonal populations—and spatial coordination between cells.
Migration depends critically on the presence of two cell types: surfactin-producing cells, which excrete a surfactant that reduces surface tension, and matrix-producing cells, which excrete extracellular polysaccharides and proteins that form a connective extracellular matrix (Fig 1B) [14]. These two cell types are not randomly distributed across the bacterial group but are rather spatially organized (Fig 1C). The matrix-producing cells form bundles of interconnected and highly aligned cells, which the authors refer to as “van Gogh” bundles. The surfactin-producing cells are not present in the van Gogh bundles but are essential for the formation of the bundles [21].

Fig 1. Collective behavior through the spatial organization of differentiated cells. (A) Initially, cells form a homogenous population. (B) Differentiation: cells start to differentiate into surfactin- (orange) and matrix- (blue) producing cells. The two cell types perform two complementary and essential tasks, resulting in a division of labor. (C) Spatial organization: the matrix-producing cells form van Gogh bundles, consisting of highly aligned and interconnected cells. Surfactin-producing cells are excluded from the bundles and have no particular spatial arrangement. (D) Collective behavior: growth of cells in the van Gogh bundles leads to buckling of these bundles, resulting in colony expansion. The buckling and resulting expansion depend critically on the presence of the two cell types and on their spatial arrangement.

The ability to migrate is a collective behavior that can be linked to the biophysical properties of the multicellular van Gogh bundles. The growth of cells in these bundles causes them to buckle, which in turn drives colony migration (Fig 1D) [21]. This is a clear example of an emergent (group-level) phenotype: the buckling of the van Gogh bundles and the resulting colony motility cannot easily be deduced from properties of individual cells.
Rather, to understand colony migration we have to understand the interactions between the two cell types as well as their spatial organization. Building on a rich body of work on the regulation of gene expression and cellular differentiation in Bacillus [14], van Gestel et al. [21] are able to show how these molecular mechanisms lead to the formation of specialized cell types that, through coordinated spatial arrangement, provide the group with the ability to move (Fig 1). The study thus uniquely bridges the gap between molecular mechanisms and collective behavior in bacterial multicellularity.

This study raises a number of intriguing questions. A first question pertains to the molecular mechanisms underlying the spatial coordination of the two cell types. Can the spatial organization be explained on the basis of known mechanisms of the regulation of gene expression in this organism, or does the formation of these patterns depend on hitherto uncharacterized gene regulation based on spatial gradients or cell–cell interactions? A second question concerns the selective forces that led to the evolution of collective migration in this organism. The authors raise the interesting hypothesis that van Gogh bundles evolved to allow for migration. Although this explanation is very plausible, it also raises the question of how selection acting on a property at the level of the group can lead to adaptation at the individual cell level. Possible mechanisms for such selective processes have been described within the framework of multilevel selection theory. However, there are still many questions regarding how, and to what extent, multilevel selection operates in the natural world [22–24].
The system described by van Gestel and colleagues [21] offers exciting opportunities to address these questions using a highly studied and experimentally amenable model organism.

Bacterial collectives (e.g., colonies or biofilms) have been likened to multicellular organisms partly because of the presence of cell differentiation and the importance of an extracellular matrix [25,26]. Higher multicellular organisms share these properties; however, they are more than simple lumps of interconnected, differentiated cells. Rather, the functioning of multicellular organisms critically depends on the precise spatial organization of these cells [27]. Even though spatial organization has been suggested before in B. subtilis biofilms [28], there was a gap in our understanding of how the spatial organization of individual cells can lead to group-level function. The van Gogh bundles in the article by van Gestel et al. [21] provide direct evidence of how differentiated cells can spatially organize themselves to give rise to group-level behavior. This shows once more that bacteria are not primitive “bags of chemicals” but rather are more like us “multicellulars” than we might have expected.  相似文献   

17.
Olivia Oxlade and co-authors introduce a Collection on tuberculosis preventive therapy in people with HIV infection.

The most recent World Health Organization Global Tuberculosis (TB) Report suggests that 50% of people living with HIV (PLHIV) newly enrolled in HIV care initiated tuberculosis preventive treatment (TPT) in 2019 [1]. TPT is an essential intervention to prevent TB disease among people infected with Mycobacterium tuberculosis—some 25% of the world’s population [2]. Without TPT, it is estimated that up to 10% of infected individuals will progress to TB disease. Among PLHIV, the prognosis is worse. Of the approximately 1.4 million annual deaths from TB, 200,000 occur among PLHIV [1], who experience TB at rates more than 30 times higher [3] than people living without HIV.

In 2018, governments at the United Nations High-Level Meeting (UNHLM) on TB committed to rapid expansion of testing for TB infection and provision of TPT [4]. The goal was the provision of TPT to at least 24 million household contacts of people with TB disease and 6 million PLHIV between 2018 and 2022. However, by the end of 2019, fewer than half a million household contacts had initiated TPT, well short of the pace needed to achieve the 5-year target [1]. On the other hand, approximately 5.3 million PLHIV have initiated TPT in the past 2 years [1], with particularly dramatic increases in countries supported by the President’s Emergency Plan for AIDS Relief (PEPFAR) [5]. Globally, among PLHIV entering HIV care programs, TPT initiation rose from 36% in 2017 to 49% in 2018 and 50% in 2019 [6,7].

To provide insight into scaling up TPT for PLHIV, it is important to consider each of the many steps involved in the “cascade of care” for TPT. A previous systematic review of studies in several populations receiving TPT concluded that nearly 70% of all people who may benefit from TPT were lost to follow-up at cascade of care steps prior to treatment initiation [8].
To maximize the impact of TPT for TB prevention among PLHIV, the full TPT cascade of care must be assessed to identify problems and develop targeted solutions addressing barriers at each step. Until now, these data had not been synthesized for PLHIV.

To address important research gaps related to TPT in PLHIV, such as this one, we are now presenting a Collection in PLOS Medicine on TPT in PLHIV. In the first paper in this Collection, Bastos and colleagues performed a systematic review and meta-analysis of the TPT cascade of care in 71 cohorts with a total of 94,011 PLHIV [9]. This analysis highlights key steps in the cascade where substantial attrition occurs and identifies individual-level and programmatic barriers and facilitators at each step. In stratified analyses, they found that losses during the TPT cascade were not different in high-income compared to low- or middle-income settings, nor were losses greater in centers performing tests for TB infection (tuberculin skin test [TST] or interferon gamma release assay [IGRA]) prior to TPT initiation.

The net benefits of TPT could potentially be increased through greater adoption of shorter rifamycin-based TPT regimens, for which there is increasing evidence of greater safety, improved treatment completion, and noninferior efficacy compared to isoniazid regimens. Two reviews of rifamycin-based regimens in mostly HIV-negative adults and children concluded that they were as effective for prevention of TB as longer isoniazid-based regimens, with better treatment completion and fewer adverse events [10,11]. However, safety and tolerability of TPT regimens can differ substantially between people with and without HIV; for rifamycin-based TPT regimens, safety outcomes were actually worse in people without HIV [12], and there can be important drug–drug interactions between rifamycin-based regimens and antiretroviral drugs [13].
Reviews of studies focused on PLHIV concluded that TPT (regardless of the regimen selected) significantly reduced TB incidence [14] and that the benefits of continuous isoniazid in high TB transmission settings outweighed the risks [15]. As part of this Collection, Yanes-Lane and colleagues conducted a systematic review and network meta-analysis of 16 randomized trials to directly and indirectly compare the risks and benefits of isoniazid and rifamycin-based TPT regimens among PLHIV [16]. Their findings highlight the better safety, improved completion, and evidence of efficacy, particularly reduced mortality, with rifamycin-based TPT regimens, while also noting improved TB prevention with extended-duration mono-isoniazid regimens. Their review also revealed that few studies exist on some important at-risk populations, such as pregnant women and those with drug-resistant TB infection.

In North America, recommendations changed in 2020 to favor shorter rifamycin-based regimens over isoniazid [17], but WHO still favors isoniazid [18], largely due to the lower drug costs. Although drug costs for rifamycins are typically higher than for isoniazid, their shorter duration and better safety profile mean that total costs of care (including personnel costs) may be lower for rifamycin-based regimens, even in underresourced settings [19]. The cost-effectiveness of different TPT regimens among PLHIV in underresourced settings remains uncertain, as does the impact on cost efficiency of antiretroviral therapy (ART) and of diagnostic tests for TB infection, such as TST or IGRA. Uppal and colleagues, in the third paper in this Collection, performed a systematic review and meta-analysis of 61 published cost-effectiveness and transmission modeling studies of TPT among PLHIV [20].
In all studies, TPT was consistently cost-effective, if not cost saving, despite wide variation in key input parameters and settings considered.

When comparing access to TPT among PLHIV to that among household contacts, many would consider the glass half full, given that almost half of all PLHIV newly accessing care initiated TPT in 2018 and 2019, and the UNHLM goal of 6 million PLHIV initiating TPT was already nearly achieved by the end of 2020. This remarkable achievement is the result of strong recommendations from WHO for TPT among PLHIV for nearly a decade and strong donor support. These policies are, in turn, based on clear and consistent evidence of individual benefits from multiple randomized trials, plus consistent evidence of cost-effectiveness from many economic analyses, as summarized in the papers in this Collection. These are useful lessons for scaling up TPT for other target populations, particularly household contacts, of whom fewer than half a million have initiated TPT against the 24 million-person target set in 2018.

However, the glass of TPT among PLHIV is also half empty. In contrast to the “90-90-90” targets, 50% of PLHIV newly enrolled in care do not initiate TPT, and PLHIV still bear a disproportionate burden of TB. Programmatic scale-up of TPT continues to encounter challenges that need to be overcome in order to translate individual-level success into population-level improvement. The study by Bastos and colleagues in this Collection has identified programmatic barriers, including drug stockouts and suboptimal training for healthcare workers, but it also offers useful solutions, including integration of HIV and TPT services [9]. New evidence on the success of differentiated service delivery will also be invaluable to support programmatic scale-up in different settings [21]. Acting on this evidence will be essential to achieve the goal of full access to effective, safe, and cost-effective TPT for PLHIV.  相似文献   

18.

Background

There are few detailed etiologic studies of severe anemia in children from malaria-endemic areas and none in those countries with holoendemic transmission of multiple Plasmodium species.

Methodology/Principal Findings

We examined associates of severe anemia in 143 well-characterized Papua New Guinean (PNG) children aged 0.5–10 years with hemoglobin concentration <50 g/L (median [inter-quartile range] 39 [33–44] g/L) and 120 matched healthy children (113 [107–119] g/L) in a case-control cross-sectional study. A range of socio-demographic, behavioural, anthropometric, clinical and laboratory (including genetic) variables were incorporated in multivariate models with severe anemia as the dependent variable. Consistent with a likely trophic effect of chloroquine or amodiaquine on parvovirus B19 (B19V) replication, B19V PCR/IgM positivity had the highest odds ratio (95% confidence interval) of 75.8 (15.4–526), followed by P. falciparum infection (19.4 (6.7–62.6)), vitamin A deficiency (13.5 (5.4–37.7)), body mass index-for-age z-score <−2.0 (8.4 (2.7–27.0)) and incomplete vaccination (2.94 (1.3–7.2)). P. vivax infection was inversely associated (0.12 (0.02–0.47)), reflecting early acquisition of immunity and/or a lack of reticulocytes for parasite invasion. After imputation of missing data, iron deficiency was a weak positive predictor (6.4% of population attributable risk).
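The odds ratios above come from multivariate models fit to the case-control data. As an illustration of the underlying quantity only, a crude odds ratio with a Woolf (log-normal) 95% confidence interval can be computed from a 2×2 table; the counts below are hypothetical and are not the study’s data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio and Woolf 95% CI for a 2x2 table:
                 exposed  unexposed
    cases           a         b
    controls        c         d
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log odds ratio
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts for illustration: 60 of 143 cases and
# 12 of 120 controls exposed to some risk factor.
or_, lo, hi = odds_ratio_ci(60, 83, 12, 108)
print(f"OR = {or_:.1f} (95% CI {lo:.1f}-{hi:.1f})")
```

A confidence interval whose lower bound exceeds 1, as in the study’s reported associations, indicates a statistically significant positive association; an interval entirely below 1, as for P. vivax, indicates an inverse one.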

Conclusions/Significance

These data show that severe anemia is multifactorial in PNG children, strongly associated with under-nutrition and certain common infections, and potentially preventable through vitamin A supplementation and improved nutrition, completion of vaccination schedules, and intermittent preventive antimalarial treatment using non-chloroquine/amodiaquine-based regimens.

20.
In the aftermath of the Ebola crisis, the global health community has a unique opportunity to reflect on the lessons learned and apply them to prepare the world for the next crisis. Part of that preparation will entail knowing, with greater precision, the scale and scope of our specific global health challenges and the resources needed to address them. However, how can we know the magnitude of the challenge, and what resources are needed, without knowing the current status of the world through accurate primary data? Once we know the current status, how can we decide on an intervention today, with a predicted impact decades out, if we cannot project into that future? Making a case for more investment will require not just better data generation and sharing but a whole new level of sophistication in our analytical capability: a fundamental shift in our thinking to set expectations that match reality. In this distributed world, being transparent with our assumptions and specific with the case for investing in global health is a powerful approach to finding solutions to the problems that have plagued us for centuries.

When we have proactively set our sights on large and defined obstacles to human wellness, the global health community has been able to chart a course toward lasting, widespread impact. However, few would argue that the global health community’s response to Ebola, while ultimately effective, was the optimal way to anticipate and address a global health crisis. Comprehensive analyses have been conducted on what worked well and what didn’t [1]. Despite all the failings that led to over 11,000 deaths and an estimated US$1.6 billion in costs to the economies of Guinea, Sierra Leone, and Liberia, the global community did come together and help turn the tide against the epidemic, albeit more slowly than would have been possible with a better-prepared world [2,3].
Major funding commitments were made when the reality and urgency of the epidemic became evident [4]. Another point that may not be widely known is that the private sector responded to the challenge by directing significant resources toward developing vaccines, drugs, and diagnostics at an unprecedented pace. As a result, we now have four vaccine candidates, three therapeutics in Phase III clinical trials, and six diagnostics authorized for emergency use by WHO [5].

At the turn of the millennium, the global community sought to address the far more complex problem of vaccination coverage. In 2000, the glaring disparity in vaccine access between wealthy and developing nations led to the formation of Gavi, the Vaccine Alliance [6]. After 15 years, Gavi has helped create a roadmap for countries to ramp up their immunization programs, reaching nearly half a billion additional children with vaccines [7]. Through its multisector partnership, Gavi not only addressed the huge challenge of improving childhood vaccination coverage; it also provided certainty to the private sector, encouraging it to manufacture products for developing-country markets and to make them affordable. In 2015, donors came together again and made US$7.5 billion in pledges, the largest-ever financial commitment to support childhood immunization [8].

We can find a comparable example in the sobering problem of tuberculosis (TB), in which the battle is being fought with a decades-old and unwieldy six-month treatment regimen of diminishing efficacy due to multidrug resistance. In 2013, TB made an estimated 9 million people sick, and 1.5 million people died from the disease [9]. However, the fight against TB is being reinvigorated. The TB Drug Accelerator (TBDA) is a groundbreaking partnership among eight pharmaceutical companies, seven research institutions, and a product development partnership funded by the Bill & Melinda Gates Foundation [10].
By driving collaboration and data sharing atypical of its partners, the TBDA aims to create a new TB drug regimen that cures patients in only one month, replacing the outmoded intervention we have today. While the structure and purpose of the TBDA took rigorous iteration to reach their current form, the data and expertise shared among its partners have already identified compounds that could potentially lead to a more effective treatment.

Although the motivations behind the investments in these three cases might seem different—a potential regional or global health catastrophe in the case of Ebola, a humanitarian imperative underlying Gavi, and a dual global drug-resistance threat and humanitarian basis for the TBDA—there is a common thread. These examples show that when there is imminent and clear need, we have been able to mobilize resources and construct creative partnerships to create an impact. Summers et al. make a compelling case for investing in global health by showing the general economic benefit of those investments [11]. Similarly, others have claimed a substantial return on investment in specific areas of health science as a motivation for future investments [12]. These cases are fairly general in their content, and we see modern investors in global health (whether countries or philanthropists) as much more demanding in terms of wanting to know exactly how their resources are deployed and what impact could be expected.

At the Bill & Melinda Gates Foundation, we are exploring a set of approaches that start with our current (and improving) knowledge of the burden of diseases relevant to low- and middle-income countries (LMICs) and compare the potential interventions we have to reduce this burden along a number of dimensions.
The ultimate objective is to arrive at a view of the actionable priorities that we can support at any one time in order to maximize our impact on health and wellbeing in the communities with the highest burden. This approach has some parallels with portfolio analysis in the biopharmaceutical industry [13], but the sparseness and poor quality of the underlying primary data in global health mean that, at best, we can rely on it as a rough guide and a mechanism for exposing outliers in cost, effectiveness, and impact (Fig 1). This approach provides a framework for comparison across diverse categories through an understandable metric. More importantly, it forces us to state our assumptions explicitly for debate and reconciliation. However, we also need to be cautious about any notion that the complex sociopolitical environments we work in, and the fluctuating humanitarian crises that arise, can ever be reduced to simple algorithms for decision-making.

Fig 1. Portfolio analysis for global health impact. Cost per disability-adjusted life year (DALY) averted is the incremental cost to deliver incremental DALY savings versus the standard of care alone. Probability of success is the estimated probability of technical and regulatory success (PTRS), informed by industry benchmarks and expert opinion. NRRV: non-replicating rotavirus vaccine. Both the cost and the probability of success are dynamic values, subject to change as information evolves.

Although our understanding of the burden of disease has improved tremendously at the national and subnational level for important pathogens, as evidenced by a recent integration of our knowledge of the spatial distribution of malaria risk in sub-Saharan Africa [14], we need to invest much more heavily in obtaining better primary data. We therefore recently launched CHAMPS, the Child Health and Mortality Prevention Surveillance Network [15].
CHAMPS will be a network of disease surveillance sites in LMICs that will help gather accurate data about how, where, and why children are getting sick and dying. For the first time in history, pathology-based surveillance will be used to track the causes of childhood mortality, complementing and improving upon existing cause-of-death information from verbal autopsy surveys and vital statistics. Through geospatial modeling and mapping, these new surveillance data will provide an increasingly broad and accurate picture to guide more effective use of scarce resources for prevention and treatment.

Improved data can also help drive progress against less familiar health challenges such as neglected tropical diseases (NTDs) [16]. Until recently, little was known about the geographical distribution of NTDs. Because of weak surveillance systems, the scarcity of geospatial mapping was greatest in sub-Saharan Africa, which has hampered the deployment of effective programs. To address this, a WHO African Region-led effort has conducted thousands of field surveys using mobile-phone data capture to complete the picture of NTDs across Africa. In addition to guiding disease control efforts, such as targeting mass drug administrations only to the places that need them, this mapping provides, for the first time, the central database necessary to analyze program performance and to project likely outcomes, including the probability of disease elimination.

The framework we use to evaluate these data has a few simple dimensions: cost per disability-adjusted life year (DALY) averted, probability of technical success, and, at a more strategic level, whether our resources fill a real gap in the funding landscape (Fig 1). There are many alternative metrics, but we have chosen this scheme for its simplicity and augment it with additional analyses when these make sense.
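The ranking logic of such a framework can be sketched in a few lines. The interventions, cost figures, and success probabilities below are invented placeholders (NRRV appears in Fig 1, but the numbers attached to it here are not from the source); the sketch only shows how cost per DALY averted and probability of success combine into a risk-adjusted comparison:

```python
from dataclasses import dataclass

@dataclass
class Intervention:
    name: str
    cost_per_daly_averted: float  # US$ per DALY averted vs. standard of care
    p_success: float              # probability of technical/regulatory success

    @property
    def risk_adjusted_cost(self) -> float:
        # Expected cost per DALY averted, discounted by the chance of failure
        return self.cost_per_daly_averted / self.p_success

# Hypothetical portfolio entries (placeholder values, not source data)
portfolio = [
    Intervention("NRRV (non-replicating rotavirus vaccine)", 25.0, 0.40),
    Intervention("New one-month TB regimen", 60.0, 0.15),
    Intervention("NTD mass drug administration scale-up", 8.0, 0.90),
]

# Rank candidates from cheapest to most expensive per expected DALY averted
for iv in sorted(portfolio, key=lambda iv: iv.risk_adjusted_cost):
    print(f"{iv.name}: ${iv.risk_adjusted_cost:.0f}/DALY averted (risk-adjusted)")
```

As the essay cautions, such a score is a rough guide for exposing outliers, not a decision algorithm; the third strategic dimension (gap in the funding landscape) resists this kind of quantification.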
An important part of the framework, for work that might only be completed well into the future, is understanding the spectrum of potential trajectories for future disease burden. Forecasting thus becomes an essential element of decision-making, and for this we need to go beyond linearly extrapolating future outcomes from past trends. Much more sophisticated forecasting that integrates all significant covariates of the main outcome is becoming available [17] and will be increasingly useful both for decision-making and for the longitudinal evaluation of projects, to assess whether interventions are shifting the envelope of outcomes in a positive direction.

The system described above, already in use across parts of our global health portfolio, makes us optimistic that we can, in the near future, expand this approach to the entire portfolio and arrive at a more systematic way of understanding the inherent value and risks of a given intervention. This will also give us a clear picture of the huge and urgent problems in global health, paired with more deeply evaluated and cost-specific solutions, as well as a forecast of the negative consequences of inaction. Much as the world, or parts thereof, was mobilized by the Ebola crisis, the childhood vaccination gap, and the TB epidemic, we would then have maximized the likelihood of accessing new resources for potential solutions to ongoing global health crises.
