1.
We describe an unusual case of type 2 leprosy reaction (T2R) with septic shock–like features induced by helminth infection in a 31-year-old Moluccan male patient who had completed a WHO multidrug therapy (MDT) multibacillary (MB) regimen 2 years before admission. During the course of illness, the patient had numerous complications, including septic shock, anemia, and disseminated intravascular coagulation (DIC). Antibiotic therapies nevertheless failed to give significant results, and the source of infection could not be identified. Helminth infection was subsequently revealed by endoscopic examination followed by parasitological culture. Symptoms resolved and organ function–specific markers returned to normal within 3 days of anthelmintic treatment. This report demonstrates the challenges in the diagnosis and treatment of severe T2R. Given that helminth infections may trigger severe T2R that mimics septic shock, health professionals need to be aware of this clinical presentation, especially in regions where both diseases are endemic.

Type 2 leprosy reaction (T2R) is a type III hypersensitivity reaction that can occur in people with lepromatous or borderline lepromatous leprosy before, during, or after completion of multidrug therapy (MDT). Its clinical manifestations are highly variable, ranging from disease limited to the skin to presentations accompanied by systemic disruption [1,2]. Uncommonly, it may also present with fever, hypotension, and tachycardia that mimic septic shock [3]. Helminth infections have been demonstrated to modulate the host immune response and induce leprosy reactions [4]. While concurrent helminth infections may benefit true sepsis by preventing exaggerated inflammation and severe pathology [5], treating the helminth coinfection contributed directly to the dramatic improvement of the patient's clinical and laboratory outcomes in this report.

2.
Two articles published earlier this year in the International Journal of Epidemiology [1,2] have reignited the debate over the World Health Organization's long-held recommendation of mass treatment of intestinal helminths in endemic areas. In this note, we discuss the content and relevance of these articles to the policy debate and review the broader research literature on the educational and economic impacts of deworming. We conclude that existing evidence still indicates that mass deworming is a cost-effective health investment for governments in low-income countries where worm infections are widespread.

3.
The Zika virus outbreak in the Americas has caused global concern. To help accelerate the fight against Zika, we launched the OpenZika project. OpenZika is an IBM World Community Grid project that uses distributed computing on millions of computers and Android devices to run docking experiments, in order to dock tens of millions of drug-like compounds against crystal structures and homology models of Zika proteins (and other related flavivirus targets). This will enable the identification of new candidates that can then be tested in vitro, to advance the discovery and development of new antiviral drugs against the Zika virus. The docking data are being made openly accessible so that all members of the global research community can use them to further advance drug discovery studies against Zika and other related flaviviruses.

The Zika virus (ZIKV) has emerged as a major public health threat to the Americas as of 2015 [1]. We have previously suggested that it represents an opportunity for scientific collaboration and open scientific exchange [2]. The health of future generations may very well depend on the decisions we make, our willingness to share our findings quickly, and open collaboration to rapidly find a cure for this disease. Since February 1, 2016, when the World Health Organization deemed the cluster of microcephaly cases, Guillain-Barré syndrome, and other neurological disorders associated with ZIKV in Latin America and the Caribbean to constitute a Public Health Emergency of International Concern (PHEIC) [3], we have seen a rapid increase in publications (S1 References and main references). We [2] and others [4,5] have described steps that could be taken to initiate a drug discovery program on ZIKV. For example, computational approaches, such as virtual screening of chemical libraries or focused screening to repurpose FDA- and/or EU-approved drugs, can be used to help accelerate the discovery of an anti-ZIKV drug. An antiviral drug discovery program can be initiated using structure-based design, based on homology models of the key ZIKV proteins. Given the initial lack of structural information on the ZIKV proteins, we built homology models for all of them, based on close homologs such as dengue virus, using freely available software [6] (S1 Table). These were made available online on March 3, 2016. We also predicted the site of glycosylation of glycoprotein E as Asn154, which was recently experimentally verified [7].

Since the end of March 2016, we have seen two cryo-EM structures and 16 crystal structures covering five target classes (S1 Table). These structures, alongside the homology models, represent potential starting points for docking-based virtual screening campaigns to help find molecules that are predicted to have high affinity for ZIKV proteins. These predictions can then be tested against the virus in cell-based assays and/or using individual protein-based assays. There are millions of molecules available that can be assayed, but which ones are likely to work, and how should we prioritize them?

In March 2016, we initiated a new open collaborative project called OpenZika (Fig 1) with IBM's World Community Grid (WCG, worldcommunitygrid.org), which has been used previously for distributed computing projects (S2 Table).
On May 18, 2016, the OpenZika project began the virtual screening of ~6 million compounds from the ZINC database (Fig 1), as well as the FDA-approved drugs and the NIH clinical collection, using AutoDock Vina and the homology models and crystal structures (S1 Table, S1 Text, S1 References), to discover novel candidate compounds that can potentially be developed into new drugs for treating ZIKV. These will be followed by additional virtual screens with a new ZINC library of ~38 million compounds and the PubChem database (at most ~90 million compounds), after their structures are prepared for docking.

Fig 1. Workflow for the OpenZika project. A. Docking input files of the targets and ligands are prepared, and positive control docking studies are performed. The crystallographic binding mode of a known inhibitor is shown as sticks with dark purple carbon atoms, while the docked binding mode against the NS5 target from HCV has cyan carbons. Our pdbqt files of the libraries of compounds we screen are also openly accessible (http://zinc.docking.org/pdbqt/). B. We have already prepared the docking input files for ~6 million compounds from ZINC (i.e., the libraries that ALP previously used in the GO Fight Against Malaria project on World Community Grid), which are currently being used in the initial set of virtual screens on OpenZika. C. IBM's World Community Grid is an internet-distributed network of millions of computers (Mac, Windows, and Linux) and Android-based tablets or smartphones in over 80 countries. Over 715,000 volunteers donate their dormant computer time (which would otherwise be wasted) to projects that are both (a) run by an academic or nonprofit research institute and (b) devoted to benefiting humanity. D. OpenZika is harnessing World Community Grid to dock millions of commercially available compounds against multiple ZIKV homology models and crystal structures (and targets from related viruses) using AutoDock Vina (AD Vina). This ultimately produces candidates (virtual hits that produced the best docking scores and displayed the best interactions with the target during visual inspection) against individual proteins, which can then be prioritized for in vitro testing by collaborators. After inspection, all computational data against ZIKV targets will be made open to the public on our website (http://openzika.ufg.br/experiments/#tab-id-7), and OpenZika results are also available upon request. The computational and experimental data produced will be published as quickly as possible.

Initially, compounds are being screened against the ZIKV homologs of drug targets that have been well-validated in research against dengue and hepatitis C viruses, such as NS5 and glycoprotein E (S1 Table, S1 Text, S1 References). These may allow us to identify broad-spectrum antivirals against multiple flaviviruses, such as dengue virus, West Nile virus, and yellow fever virus. In addition, docking against the crystal structure of a related protein from a different pathogen can sometimes discover novel hits against the pathogen of interest [8].

In addition to docking-based filters, the compounds virtually screened on OpenZika will also be filtered using machine learning models (S1 Text, S1 References). These should be useful selection criteria for subsequent tests by our collaborators in whole-cell ZIKV assays, to verify their antiviral activity for blocking ZIKV infection or replication.
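To make the docking step more concrete, the following is a minimal sketch (not OpenZika's actual pipeline code) of how a batch of prepared ligand files might be docked against one receptor with the AutoDock Vina command-line tool and ranked by predicted affinity. The receptor file name, ligand directory, and search-box coordinates are hypothetical placeholders.

```python
# Minimal sketch (not OpenZika's actual pipeline): batch-dock a directory of
# prepared ligand .pdbqt files against one receptor with the AutoDock Vina
# CLI, then rank ligands by best predicted affinity. The receptor file, ligand
# directory, and search-box coordinates are hypothetical placeholders.
import glob
import os
import re
import subprocess

RECEPTOR = "ns5_homology_model.pdbqt"  # hypothetical receptor file
LIGAND_DIR = "ligands"                 # directory of prepared .pdbqt ligands
OUT_DIR = "docked"
BOX = {  # hypothetical search box around the presumed binding site
    "center_x": 10.0, "center_y": 5.0, "center_z": -3.0,
    "size_x": 20.0, "size_y": 20.0, "size_z": 20.0,
}

os.makedirs(OUT_DIR, exist_ok=True)
scores = {}
for ligand in sorted(glob.glob(os.path.join(LIGAND_DIR, "*.pdbqt"))):
    name = os.path.splitext(os.path.basename(ligand))[0]
    cmd = ["vina", "--receptor", RECEPTOR, "--ligand", ligand,
           "--out", os.path.join(OUT_DIR, name + "_out.pdbqt")]
    cmd += [f"--{key}={value}" for key, value in BOX.items()]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    # Vina prints a result table to stdout; row 1 is the best binding mode,
    # with affinity in kcal/mol (more negative = better predicted binding).
    match = re.search(r"^\s*1\s+(-?\d+\.\d+)", result.stdout, re.MULTILINE)
    if match:
        scores[name] = float(match.group(1))

# Print the top 20 virtual hits for visual inspection and in vitro follow-up.
for name, affinity in sorted(scores.items(), key=lambda item: item[1])[:20]:
    print(f"{name}\t{affinity:.1f} kcal/mol")
```

In a real campaign of this scale, the loop body is what gets distributed across World Community Grid devices; the ranking step runs once the results are collected.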
Since all OpenZika docking data will be in the public domain soon after they are completed and verified, we and other labs can then advance some of these new virtual candidates into experimentally validated hits, leads, and drugs through collaborations with wet labs. This exemplifies open science, which should help scientists around the world as they address the long and arduous process of discovering and developing new drugs. Screening millions of compounds against many different protein models in this way would take far more resources and time than any academic researcher could generally obtain or spend. As of August 16, 2016, we have submitted 894 million docking jobs. Over 6,934 CPU years have been donated to us, enabling over 439 million different docking jobs. We recently selected an initial batch of candidates for NS3 helicase (data openly available at http://openzika.ufg.br/experiments/#tab-id-7) for in vitro testing. Without the unique community of volunteers and tremendous resources provided by World Community Grid, this project would have been very difficult to initiate at this scale in a reasonable time frame.

The OpenZika project will ultimately generate several billion docking results, which could make it the largest computational drug discovery project ever performed in academia. The potential challenges we foresee include finding laboratories with sufficient funding to pursue compounds, synthesize analogs, and develop target-based assays to validate our predictions and generate SAR (structure–activity relationship) data to guide the process of developing the new hits into leads and then drugs. Given the difficult nature of drug discovery and the eventual evolution of drug resistance, funding of ZIKV research, once initiated, will likely need to be sustained for several years, if not longer (e.g., HIV research has been funded for decades). As with other WCG projects, once scientists identify experimentally validated leads, finding a company to license them and pursue them in clinical trials and beyond will need incentives such as the FDA Tropical Disease Priority Review voucher [9], which has a financial value on the open market [10].

By working together and opening our research to the scientific community, many other labs will also be able to take promising molecular candidates forward to accelerate progress toward defeating the ZIKV outbreak. We invite any interested researcher to join us (send us your models, or volunteer to assay the candidates we identify through this effort against any of the flaviviruses), and we hope new volunteers in the general public will donate their dormant, spare computing cycles to this cause. We will ultimately report the full computational and experimental results of this collaboration.

Advantages and Disadvantages of OpenZika

Advantages
  • Open Science could accelerate the discovery of new antivirals using docking and virtual screening
  • Docking narrows down compounds to test, which saves time and money
  • Distributed computing on World Community Grid is free to use, and the workflow is simpler than using conventional supercomputers
Disadvantages
  • Concern around intellectual property ownership and whether companies will develop drugs coming from this effort
  • Need for experimental assays will always be a factor
  • Testing in vitro and in vivo is not free, nor are the samples of the compounds

4.
Ribavirin is the only available Lassa fever treatment. The rationale for using ribavirin is based on one clinical study conducted in the early 1980s. However, reanalysis of previously unpublished data reveals that ribavirin may actually be harmful in some Lassa fever patients. An urgent reevaluation of ribavirin is therefore needed.

Fifty years after its discovery, Lassa fever remains uncontrolled, and mortality remains unacceptably high. Since 2015, Nigeria has been experiencing increasingly large outbreaks of Lassa fever, with new peaks reached in 2016, 2017, and 2018. In 1987, McCormick and colleagues reported a case fatality rate (CFR) of 16.5% among 441 patients hospitalized in Sierra Leone [1]. In Nigeria in 2019, 124 deaths were recorded among 554 laboratory-confirmed cases, for a CFR of 22% [2].

Ribavirin is the only available Lassa fever–specific treatment and has been used routinely for over 25 years. However, intravenous ribavirin is not licensed for Lassa fever. Its mechanism of action is unclear, it is expensive and hard to source, and it has well-known toxicities [3]. The evidence for using ribavirin in Lassa fever therefore deserves careful scrutiny. The emergence of potential new therapeutics for Lassa fever, such as favipiravir and monoclonal antibodies, adds further weight to the case for reconsidering the role of ribavirin, since the evaluation of new drugs in clinical trials requires comparison against existing treatments with a known efficacy and safety profile [4,5].

The rationale for using ribavirin in Lassa fever is primarily based on one clinical study conducted in Sierra Leone in the late 1970s and early 1980s. McCormick and colleagues [6] reported that in Lassa fever patients with a serum aspartate aminotransferase (AST) level of ≥150 IU/L, the use of intravenous ribavirin within the first 6 days of illness reduced the fatality rate from 61% (11/18) with no ribavirin to 5% (1/20) (p = 0.002). The authors concluded that ribavirin is effective in the treatment of Lassa fever. However, there are long-standing concerns about the methods used in this study. Although randomization was used to assign patients to treatment groups, the comparisons presented were not according to the original randomized groups, and we have reconstructed their derivation (Fig 1). Serious limitations of the comparisons presented include the use of historic controls, the inclusion of pregnant women in the control group but their exclusion from the ribavirin group (case fatality is around 2-fold higher in pregnant women than in nonpregnant patients), and the post hoc merging of treatment groups. Despite this, and the fact that the results only supported the use of ribavirin in nonpregnant adult patients with AST ≥150 IU/L, this study is the basis upon which ribavirin is now used in all patients with Lassa fever, including children, pregnant women, and people with normal liver function.

Fig 1. Reconstruction of the McCormick et al. data. AST, aspartate aminotransferase; PW, pregnant women. † Discrepancy within McCormick et al., with 39 patients reported treated with oral ribavirin but only 38 (14+24) outcomes reported. ‡ Discrepancy within McCormick et al., with table 1 reporting 12/63 but text reporting 13/62.

It has been well known among Lassa specialists that the McCormick study reports a subset of a much larger dataset assembled by the Lassa treatment unit in Sierra Leone, and that a report on the full dataset was commissioned by the United States Army Medical Research and Development Command. One of us (PH) therefore submitted a freedom of information (FOI) request to access this report. The full report and an accompanying memo are now available, and we encourage readers to access and read the materials [7,8].
The memo states that some of the original trial records were unavailable and that the data should be "interpreted with extreme caution." Nonetheless, the report presents data from 1977 through 1991 on 807 Lassa fever patients with a known outcome who were assigned to different ribavirin treatment regimens. These newly available data raise important questions about the safety and efficacy of ribavirin for the treatment of Lassa fever.

The original data were lost during the civil war in Sierra Leone, but the report contains tables showing the distribution of characteristics of the whole population according to treatment group, an appendix showing individual data for the 405 patients who died, and the results of a logistic regression analysis comparing the effect of ribavirin with no treatment for some of the ribavirin regimens, after adjusting for patient characteristics. Based on these data, we derived aggregated datasets containing the number of deaths according to treatment group and individual characteristics. We combined groups I ("No treatment given") and X ("Drugs were not available") as no treatment, and all groups in which ribavirin was administered (II, III, and V to IX) as ribavirin. Exhibit III-8 in the FOI report presented case fatality by treatment group and AST, from which we derived crude odds ratios (ORs) comparing ribavirin with no treatment. The logistic regression reported in Exhibit III-9 was restricted to "those treatment groups that yielded the lowest case fatality rates with respect to untreated patients in the high severity patient illness category" (groups II, III, V, and VII). It was adjusted for age, gender, time to admission, time to treatment, length of stay, and log(AST). We also reconstructed analyses by digitizing the data on individuals who died in Appendix D, calculating the number of deaths according to treatment group and AST, and subtracting these numbers from the totals presented in Exhibit III-2. This allowed us to estimate overall mortality ORs before and after adjusting for AST, although the numbers did not entirely match, and so the number of deaths was reduced in some small groups.

Estimates of the effect of oral and intravenous ribavirin from the McCormick study, and of all ribavirin from the full report, are shown in Fig 2. Based on the crude ORs derived from Exhibit III-8, ribavirin reduced mortality only in patients with serum AST ≥150 IU/L, with less benefit (OR 0.48 [95% CI 0.30 to 0.78]) than reported by McCormick and colleagues. However, ribavirin appeared to increase mortality in patients with serum AST <150 IU/L (2.90 [1.42 to 5.95]). In fact, in our analysis, the only stratum in which ribavirin appeared protective (0.38 [0.21 to 0.70]) was serum AST >300 IU/L (Table H in S1 Text). The logistic regression reported in the FOI report suggested a modest reduction in mortality, but the reasons for the choice of treatment groups compared were unclear. In the reconstructed analyses, ribavirin was associated with increased overall mortality (2.12 [1.67 to 2.68]), although this was attenuated after adjustment for AST (1.48 [1.05 to 2.08]).

Fig 2. Forest plot of the OR of death in treatment and risk subgroups. AST, aspartate aminotransferase; FOI, freedom of information; OR, odds ratio.

In our view, there is a compelling case for reevaluating the role of ribavirin in the care of patients with Lassa fever. The data suggest that ribavirin treatment may harm Lassa fever patients with AST <150 IU/L.
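For readers who want to follow the arithmetic behind such stratified estimates, the sketch below shows how a crude OR of death and a conventional log-based (Woolf) 95% CI can be computed from aggregated 2×2 counts of the kind tabulated in Exhibit III-8. The counts used here are hypothetical placeholders, not the FOI report's actual figures.

```python
# Minimal sketch: crude odds ratio (OR) of death for ribavirin vs. no
# treatment, with a log-based (Woolf) 95% CI, from aggregated 2x2 counts.
# The counts below are hypothetical placeholders, not the FOI report's figures.
import math

def odds_ratio_with_ci(deaths_rx, n_rx, deaths_ctl, n_ctl, z=1.96):
    """Return (OR, lower, upper) comparing treated vs. untreated mortality."""
    a, b = deaths_rx, n_rx - deaths_rx        # treated: deaths, survivors
    c, d = deaths_ctl, n_ctl - deaths_ctl     # untreated: deaths, survivors
    oratio = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lower = math.exp(math.log(oratio) - z * se_log_or)
    upper = math.exp(math.log(oratio) + z * se_log_or)
    return oratio, lower, upper

# Hypothetical stratum: 60/200 deaths on ribavirin vs. 40/180 untreated.
oratio, lower, upper = odds_ratio_with_ci(60, 200, 40, 180)
print(f"OR {oratio:.2f} (95% CI {lower:.2f} to {upper:.2f})")
# -> OR 1.50 (95% CI 0.94 to 2.39)
```

An OR above 1 with a CI excluding 1 would indicate higher odds of death in the treated group, which is the pattern the reconstructed overall analysis showed before adjustment.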
The limitations revealed by the US Army report, such as large amounts of missing data, unclear treatment allocation practices, imbalances in treatment groups, and errors in coding serology results, cast further doubt on the conclusions of the McCormick study. This aligns with 2 recent systematic reviews, by Eberhardt and colleagues and Cheng and colleagues, which concluded that the efficacy of ribavirin in Lassa fever was uncertain because of a critical risk of bias in existing studies [9,10].

Challenging a quarter of a century of clinical practice is difficult. The first step is to acknowledge the inadequacies in our knowledge and to ensure that treatment recommendations for Lassa fever better reflect the (weak) strength of evidence for ribavirin in different patient populations. Vigorous efforts should be made to engage clinicians and patients in designing a placebo-controlled trial to assess the safety and efficacy of ribavirin treatment in Lassa fever patients, particularly in those with milder disease (as may be indicated by an admission AST <150 IU/L), in whom the available evidence is compatible with ribavirin causing more harm than good.

In conclusion, Lassa fever patients are receiving a drug that may lack efficacy or cause harm. It is incumbent on us to ensure that the next 25 years of Lassa fever treatment are built on more solid foundations.

5.
“Fit-for-purpose” diagnostic tests have emerged as a prerequisite to achieving global targets for the prevention, control, elimination, and eradication of neglected tropical diseases (NTDs), as highlighted by the World Health Organization’s (WHO) new roadmap. There is an urgent need to develop new tools for those diseases for which no diagnostics currently exist and to improve existing diagnostics for the remaining diseases. Yet efforts to achieve this, and other crosscutting ambitions, are fragmented, and the burden of these 20 debilitating diseases remains immense. Compounded by the Coronavirus Disease 2019 (COVID-19) pandemic, programmatic interruptions, systemic weaknesses, limited investment, and poor commercial viability undermine global efforts, with a lack of coordination between partners leading to the duplication and potential waste of scant resources. Recognizing the pivotal role of diagnostic testing and the ambition of WHO, we must move forward by creating an ecosystem that prioritizes country-level action, collaboration, creativity, and commitment to new levels of visibility. Only then can we start to accelerate progress and make new gains that move the world closer to the end of NTDs.

Ahead of the second-ever World Neglected Tropical Disease (NTD) Day in January 2021, and amid the global Coronavirus Disease 2019 (COVID-19) crisis, the World Health Organization (WHO) launched a new roadmap for the prevention, control, elimination, and eradication of NTDs—a group of 20 diseases affecting more than one billion people worldwide [1]. Diagnostic testing is central to safeguarding decades of progress in NTDs and must be strategically leveraged to reach the goals laid out in the new NTD roadmap.

Stepping back, we recognize the massive progress that has been made to combat NTDs. Today, 500 million fewer people need treatment for these debilitating diseases than in 2010, and 40 countries or areas have eliminated at least one of the 20 [1]. Yet, despite these gains, NTDs continue to impose a devastating human, social, and economic toll on the world’s poorest and most vulnerable communities [2–6]. COVID-19 is compounding the situation by wreaking havoc on health systems, which impacts progress on NTDs: this includes interruptions to mass treatment campaigns for diseases controlled through preventive chemotherapy (PCT) or individual case management interventions, as well as the rerouting of already sparse funding and resources [7].

Diagnostic testing has been central to the COVID-19 response, even with the introduction of vaccines. The rapid ramp-up of research and development (R&D), the scaling up of low-cost and decentralized testing, and country-led approaches to tailored testing strategies for COVID-19, as well as the lessons learned, can also provide new thinking around testing for NTDs. The new NTD roadmap offers a series of multisectoral actions and intensified, cross-cutting approaches to get us back on track—with diagnostics central to unlocking and accelerating this progress [1].

However, the NTD roadmap shows that, of all 20 diseases or disease groups, just 2 (yaws and snakebite envenoming) are supported by adequate and accessible diagnostic tools. Six have no diagnostic tests available at all, and the tools for each of the remaining conditions are in urgent need of adaptation, modification, and/or improved accessibility (likely a more cost-effective option than the development of new diagnostics for these NTDs) [1]. This has to change. NTDs cannot continue to be neglected in favor of other competing priorities, or we risk losing the progress made to date.

Until the COVID-19 pandemic thrust testing into the spotlight, diagnostics had been a “silent partner” in healthcare, receiving little by way of international attention and funding, specific country strategies, and dedicated budget lines. NTDs are no exception. Just 5% of the (limited) funding made available to NTDs has been invested in new diagnostics, compared with 44% and 39% invested in basic research and in medicines and vaccines, respectively [1]. For most NTDs, diagnostics are a market failure and, as such, are not commercially viable enough to attract private investment. Consequently, very few diagnostic developers engage in this area—contrary, for example, to COVID-19, where developers number in the hundreds. Furthermore, as some diseases approach the last mile of elimination, falling infection rates precipitate the need for increasingly sensitive tests [1]. But progress in R&D is slow and fragmented, with a lack of engagement and coordination between governments, industry, donors, and development actors leading to the duplication—and potential waste—of scant resources.
While serial testing using multiple diagnostic tools or techniques can compensate for low sensitivity [8], such approaches are associated with increased costs of testing, sample collection, and transportation (a short sketch of the underlying arithmetic follows the list below).

Closing the diagnostic gap, then, is a prerequisite to achieving the global ambition for NTDs, with the new NTD roadmap giving a blueprint for action. It is for this reason that we call on governments, industry, donors, and development actors to
  • Prioritize country-level diagnostic action: As we enter a new era in NTD management and control, we need to shift from traditional, donor-led models to country-driven initiatives. Government ministries must engage with, and advocate on behalf of, their poorest and most vulnerable populations so that no one is left behind. Political frameworks should prioritize diagnostics for NTDs in line with local disease burdens, and as part of fully funded, national health action plans that include a commitment to seeing the process through. Capacity building for diagnostics is also essential at country, sub-regional, and regional levels, including the establishment of laboratory networks, so that testing can be implemented in field settings.
  • Collaborate and create: There is never going to be a one-size-fits-all solution for NTD diagnostics. If targets are to be achieved, we need global frameworks that enable industry, manufacturers, and pharmaceutical companies to engage in the whole process, from R&D to supply chain logistics. Companies need to share knowledge, lessons, and innovation across multiple diseases. This will mean breaking silos and finding new ways to harness the power of existing products, technologies, and infrastructures. Further, it will mean creating economies of scale through regional manufacturing hubs and finding new, cross-cutting approaches to drive systemic change. To maximize access to technology and relevant intellectual property rights for NTD diagnostics, it is important to ensure that such rights are broadly available (non-exclusively) in NTD-endemic countries and are affordable (e.g., zero-royalty rights).
  • Commit to new levels of visibility: The resources needed to realize this ambition are limited, and a lack of visibility around the diagnostic landscape undermines progress in NTD management and control. Creating an ecosystem with visibility, transparency, and integration at its core will help streamline programmatic action, reduce the risk of duplication, and leverage the full potential of this limited pool. To do this, industry, donors, and other development actors must provide the information needed to map both the funding and product landscapes. Using this information to create a virtual product pipeline will bring an unprecedented level of transparency to diagnostic development, harmonizing multisectoral efforts and creating a robust information platform from which new collaborations, synergies, and innovation can grow. Developing an online, open-access diagnostic pipeline for WHO NTD roadmap priority pathogens would serve two purposes: (i) driving advocacy to address critical product and funding gaps; and (ii) reducing the likelihood of duplication of efforts. Together, this would strengthen partnerships across all stakeholders, from donors to industry partners, to accelerate the development, evaluation, and adoption of diagnostic solutions for NTDs. The newly established NTD Diagnostic Technical Advisory Group (DTAG) to the WHO NTD department has already identified the priority diagnostic needs for NTD programs, not only in terms of developing new tools but also in terms of the accessibility of existing tools [9]. Several subgroups that focus more narrowly on single diseases or specific topics (e.g., skin NTDs or cross-cutting themes) have been established and tasked with developing tool- and biomarker-agnostic target product profiles (TPPs), which are now available (for the most part) on the WHO website for use by any diagnostic manufacturer to support development of their specific technology. Alignment with these diagnostic priorities by all stakeholders is strongly recommended to facilitate attainment of the WHO 2030 NTD roadmap goals.
  • Establish NTD biobanks: Biobanks are required for the clinical evaluation and validation of new diagnostic tests. Establishing local biobanks would support a country-driven approach as well as allowing for head-to-head comparisons between tests and assessments of cross-reactivity across different NTDs.
  • Invest in existing diagnostics: The development of new diagnostics is a complex process, and the time from development to implementation can be lengthy. Training laboratory staff in the use of existing diagnostics and the establishment of robust quality control systems are effective approaches to achieving shorter-term improvements.
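As a rough illustration of the serial-testing trade-off flagged before the list, the sketch below computes the combined sensitivity and specificity of an "any test positive" rule. It assumes the tests err independently, which real NTD diagnostics may not; all performance numbers are hypothetical.

```python
# Minimal sketch of the serial-testing trade-off: combined sensitivity and
# specificity when a person is called positive if ANY of several tests is
# positive. Assumes tests err independently (real assays may be correlated);
# all performance numbers below are hypothetical.
def any_positive_rule(sensitivities, specificities):
    miss = 1.0
    for sens in sensitivities:
        miss *= 1.0 - sens       # every test must miss a true case
    spec = 1.0
    for sp in specificities:
        spec *= sp               # a true negative must clear every test
    return 1.0 - miss, spec

# Two mediocre tests (70% sensitive, 95% specific) combined:
sens, spec = any_positive_rule([0.70, 0.70], [0.95, 0.95])
print(f"combined sensitivity {sens:.2f}, combined specificity {spec:.2f}")
# -> combined sensitivity 0.91, combined specificity 0.90
```

The arithmetic makes the cost trade-off explicit: sensitivity rises from 0.70 to 0.91, but every additional test adds sample collection, transport, and testing costs, and specificity erodes.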
There is a long road ahead, but the past 10 years have shown us what can be achieved when governments, industry, donors, and development actors are bound by a shared, global goal. As we look forward to the next decade, we must prioritize country-level action, collaboration, creativity, and commitment to new levels of visibility, if we are to finally end the neglect of NTDs.

6.
Kate Causey and Jonathan F Mosser discuss what can be learnt from the observed impacts of the COVID-19 pandemic on routine immunisation systems.

In the final months of 2021, deaths due to Coronavirus Disease 2019 (COVID-19) surpassed 5 million globally [1]. Available data suggest that even this staggering figure may be a substantial underestimate of the true toll of the pandemic [2]. Beyond mortality, it may take years to fully quantify the direct and indirect impacts of the COVID-19 pandemic, such as disruptions in preventive care services. In an accompanying research study in PLOS Medicine, McQuaid and colleagues report on the uptake of routine childhood immunizations in 2020 in Scotland and England during major pandemic-related lockdowns [3]. This adds to a growing body of literature quantifying the impact of the COVID-19 pandemic on routine health services and childhood immunization [4,5], and it provides important opportunities to learn from early pandemic experiences as immunization systems face ongoing challenges.

McQuaid and colleagues compared weekly or monthly data on vaccine uptake in Scotland and England from January to December of 2020 with the rates observed in 2019 to estimate the changes in uptake before, during, and after COVID-19 pandemic lockdowns in each country. The authors included 2 different preschool immunizations, each with multiple doses. They found significantly increased uptake within 4 weeks of eligibility during the lockdown and postlockdown periods in Scotland for all 5 vaccine dose combinations examined: during lockdown, increases ranged from 1.3 to 14.3 percentage points. In England, there were significant declines in uptake during the prelockdown, lockdown, and postlockdown periods for all 4 vaccine dose combinations examined. However, declines during lockdown were small, with decreases ranging from 0.5 to 2.1 percentage points. Due to the nature of the available data, the authors were unable to account for possible seasonal variation in vaccine delivery, control for important individual-level confounders or effect modifiers such as child sex and parental educational attainment, or directly compare outcomes across the 2 countries.

These findings stand in contrast to the documented experience of many other countries, where available data suggest historic disruptions in routine childhood vaccination coverage, particularly during the first months of pandemic-related lockdowns [5,6]. Supply-side limitations, such as delayed shipments of vaccines and supplies [7], inadequate personal protective equipment [8], staff shortages [9], and delayed or canceled campaigns and introductions [9], threatened vaccine delivery. Furthermore, fear of exposure to COVID-19 at vaccination centers [10], misinformation about vaccine safety [8], and lockdown-related limitations on travel to facilities [9,10] reduced demand. In polls of country experts conducted by WHO, UNICEF, and Gavi, the Vaccine Alliance, throughout the second quarter of 2020, 126 of 170 countries reported at least some disruption to routine immunization programs [10,11]. Global estimates suggest that millions more children missed doses of important vaccines than would have in the absence of the COVID-19 pandemic [5,6].
While many vaccine programs showed remarkable resilience in the second half of 2020, with rates of vaccination returning to or even exceeding prepandemic levels [5,6], disruptions to immunization services persisted into 2021 in many countries [12].

As the authors discuss, it is critical to pinpoint the specific program policies and strategies that contributed to increased uptake in Scotland and only small declines in England and, more broadly, to the rapid recovery of vaccination rates observed in many other countries. McQuaid and colleagues cite work suggesting that increased flexibility in parental working patterns during lockdowns, providing mobile services or public transport to vaccine centers, and sending phone- and mail-based reminders are strategies that may have improved uptake of timely vaccination in Scotland during this period [13]. Similarly, immunization programs around the world have employed a broad range of strategies to maintain or increase vaccination during the pandemic. Leaders in Senegal, Paraguay, and Sri Lanka designed and conducted media campaigns to emphasize the importance of childhood immunization even during lockdown [8,14,15]. Although many programs delayed mass campaigns in the spring of 2020, multiple countries were able to implement campaigns by the summer of 2020 [8,16–20]. In each of these examples, leaders responded quickly to meet the unique challenges presented by the COVID-19 pandemic in their communities.

Increased data collection and tracking systems are essential for efficient and effective responses when delivery programs face challenges. When concern arose about pandemic-related disruptions to immunization services, public health decision-makers in Scotland and England responded by increasing the frequency and level of detail of reports of vaccine uptake and by making these data available for planning and research. The potential for robust data systems to inform real-time decision-making is not limited to high-income countries. For instance, the Nigerian National Health Management Information System released an extensive online dashboard shortly after the onset of the pandemic, documenting the impact of COVID-19 on dozens of indicators of health service uptake, including 16 related to immunization [21]. Vaccination data systems that track individual children and doses, such as the reminder system in Scotland, allow for highly targeted responses. Similarly, in Senegal, Ghana, and Karachi, Pakistan, healthcare workers have relied on existing or newly implemented tracking systems to identify children who have missed doses and to provide text message and/or phone call reminders [8,22,23]. Investing in robust routine data systems allows for rapid scale-up of data collection, targeted services to those who miss doses, and a more informed response when vaccine delivery challenges arise.

Policy and program decision-makers must learn from the observed impacts of the COVID-19 pandemic on health systems and vaccine delivery. The study by McQuaid and colleagues provides further evidence that vaccination programs in England and Scotland leveraged existing strengths and identified novel strategies to mitigate disruptions and deliver vaccines in the early stages of the pandemic. However, the challenges posed by the pandemic to routine immunization services continue.
To mitigate the risk of outbreaks of measles and other vaccine-preventable diseases, strategies are needed to maintain and increase coverage, while ensuring that children who missed vaccines during the pandemic are quickly caught up. The accompanying research study provides important insights into 2 countries where services were preserved—and even increased—in the early pandemic. To meet present and future challenges, we must learn from early pandemic successes such as those in Scotland and England, tailor solutions to improve vaccine uptake, and strengthen data systems to support improved decision-making.

7.
Neurocysticercosis (NCC), the infection of the nervous system by the cystic larvae of Taenia solium, is a highly pleomorphic disease because of differences in the number and anatomical location of lesions, the viability of parasites, and the severity of the host immune response. Most patients with parenchymal brain NCC present with few lesions and a relatively benign clinical course, but massive forms of parenchymal NCC can carry a poor prognosis if not recognized and appropriately managed. We present the main presentations of massive parenchymal NCC and their differential characteristics.

Infection of the central nervous system by the larval stage of Taenia solium—the pork tapeworm—causes neurocysticercosis (NCC), a highly pleomorphic disease [1]. This pleomorphism is partly related to differences in the number and anatomical location of lesions, the viability of parasites, and the severity of the host immune response against the infection. Cysticerci may be located within the brain parenchyma, the subarachnoid space, the ventricular system, the spinal cord, the sellar region, or even the subdural space.

Most patients with parenchymal NCC present with few lesions and a clinical course that is often more benign than that observed in the subarachnoid and ventricular forms of NCC, where a sizable proportion of patients are left with disabling sequelae or may even die as a result of the disease [2,3]. Nevertheless, massive forms of parenchymal NCC require special attention to reduce the risk of complications related to the disease itself or to inadequate treatment. Here, we present the main presentations of massive parenchymal NCC and their differential characteristics. There is no standardized definition of how many cysts constitute massive NCC: the term “massive” has usually been applied when there are more than 100 lesions in the brain parenchyma, but others have used smaller numbers (50), and there is no defined cutoff.

8.
9.
Olivia Oxlade and co-authors introduce a Collection on tuberculosis preventive therapy in people with HIV infection.

The most recent World Health Organization (WHO) Global Tuberculosis (TB) Report suggests that 50% of people living with HIV (PLHIV) newly enrolled in HIV care initiated tuberculosis preventive treatment (TPT) in 2019 [1]. TPT is an essential intervention to prevent TB disease among people infected with Mycobacterium tuberculosis—some 25% of the world’s population [2]. Without TPT, it is estimated that up to 10% of infected individuals will progress to TB disease. Among PLHIV, the prognosis is worse. Of the approximately 1.4 million annual deaths from TB, 200,000 occur among PLHIV [1], who experience TB at rates more than 30 times higher than people living without HIV [3].

In 2018, governments at the United Nations High-Level Meeting (UNHLM) on TB committed to a rapid expansion of testing for TB infection and provision of TPT [4]. The goal was the provision of TPT to at least 24 million household contacts of people with TB disease and 6 million PLHIV between 2018 and 2022. However, by the end of 2019, fewer than half a million household contacts had initiated TPT, well short of the pace needed to achieve the 5-year target [1]. On the other hand, approximately 5.3 million PLHIV initiated TPT in the past 2 years [1], with particularly dramatic increases in countries supported by the President’s Emergency Plan for AIDS Relief (PEPFAR) [5]. Globally, among PLHIV entering HIV care programs, TPT initiation rose from 36% in 2017 to 49% in 2018 and 50% in 2019 [6,7].

To provide insight into scaling up TPT for PLHIV, it is important to consider each of the many steps involved in the “cascade of care” for TPT. A previous systematic review of studies in several populations receiving TPT concluded that nearly 70% of all people who might benefit from TPT were lost to follow-up at cascade-of-care steps prior to treatment initiation [8]. To maximize the impact of TPT for TB prevention among PLHIV, the full TPT cascade of care must be assessed to identify problems and develop targeted solutions addressing barriers at each step (a toy illustration of how per-step losses compound appears at the end of this piece). Until now, these data had not been synthesized for PLHIV.

In order to address important research gaps related to TPT in PLHIV such as this one, we now present a Collection in PLOS Medicine on TPT in PLHIV. In the first paper in the Collection, Bastos and colleagues performed a systematic review and meta-analysis of the TPT cascade of care in 71 cohorts with a total of 94,011 PLHIV [9]. This analysis highlights key steps in the cascade where substantial attrition occurs and identifies individual-level and programmatic barriers and facilitators at each step. In stratified analyses, they found that losses during the TPT cascade were no different in high-income compared with low- or middle-income settings, nor were losses greater in centers performing tests for TB infection (tuberculin skin test [TST] or interferon gamma release assay [IGRA]) prior to TPT initiation.

The net benefits of TPT could potentially be increased through greater adoption of shorter rifamycin-based TPT regimens, for which there is increasing evidence of greater safety, improved treatment completion, and noninferior efficacy compared with isoniazid regimens. Two reviews of rifamycin-based regimens in mostly HIV-negative adults and children concluded that they were as effective for the prevention of TB as longer isoniazid-based regimens, with better treatment completion and fewer adverse events [10,11].
However, the safety and tolerability of TPT regimens can differ substantially between people with and without HIV (for rifamycin-based TPT regimens, safety outcomes were actually worse in people without HIV [12]), and there can be important drug–drug interactions between rifamycin-based regimens and antiretroviral drugs [13]. Reviews of studies focused on PLHIV concluded that TPT (regardless of the regimen selected) significantly reduced TB incidence [14] and that the benefits of continuous isoniazid in high TB transmission settings outweighed the risks [15]. As part of this Collection, Yanes-Lane and colleagues conducted a systematic review and network meta-analysis of 16 randomized trials to directly and indirectly compare the risks and benefits of isoniazid and rifamycin-based TPT regimens among PLHIV [16]. Their findings highlight the better safety, improved completion, and evidence of efficacy, particularly reduced mortality, with rifamycin-based TPT regimens, while also noting improved TB prevention with extended-duration mono-isoniazid regimens. Their review also revealed that few studies exist on some important at-risk populations, such as pregnant women and people with drug-resistant TB infection.

In North America, recommendations changed in 2020 to favor shorter rifamycin-based regimens over isoniazid [17], but WHO still favors isoniazid [18], largely because of the lower drug costs. Although drug costs for rifamycins are typically higher than for isoniazid, their shorter duration and better safety profile mean that the total costs of care (including personnel costs) may be lower for rifamycin-based regimens, even in underresourced settings [19]. The cost-effectiveness of different TPT regimens among PLHIV in underresourced settings remains uncertain, as does the impact of antiretroviral therapy (ART) and of diagnostic tests for TB infection, such as TST or IGRA, on cost efficiency. Uppal and colleagues, in the third paper in this Collection, performed a systematic review and meta-analysis of 61 published cost-effectiveness and transmission modeling studies of TPT among PLHIV [20]. In all studies, TPT was consistently cost-effective, if not cost saving, despite wide variation in the key input parameters and settings considered.

When comparing access to TPT among PLHIV with that among household contacts, many would consider the glass half full, given that almost half of all PLHIV newly accessing care initiated TPT in 2018 and 2019, and the UNHLM goal of 6 million PLHIV initiating TPT was already nearly achieved by the end of 2020. This remarkable achievement is the result of strong recommendations from WHO for TPT among PLHIV for nearly a decade, together with strong donor support. These policies are, in turn, based on clear and consistent evidence of individual benefits from multiple randomized trials, plus consistent evidence of cost-effectiveness from many economic analyses, as summarized in the papers in this Collection. These are useful lessons for scaling up TPT for other target populations, particularly household contacts, of whom fewer than half a million have initiated TPT, against the 24 million–person target set in 2018.

However, the glass of TPT among PLHIV is also half empty. In contrast to the “90-90-90” targets, 50% of PLHIV newly enrolled in care do not initiate TPT, and PLHIV still bear a disproportionate burden of TB. Programmatic scale-up of TPT continues to encounter challenges that need to be overcome in order to translate individual-level success into population-level improvement.
The study by Bastos and colleagues in this Collection identified programmatic barriers, including drug stockouts and suboptimal training for healthcare workers, but it also offers useful solutions, including the integration of HIV and TPT services [9]. New evidence on the success of differentiated service delivery will also be invaluable to support programmatic scale-up in different settings [21]. Acting on this evidence will be essential to achieve the goal of full access to effective, safe, and cost-effective TPT for PLHIV.
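To make the cascade-of-care arithmetic referenced earlier concrete, the toy sketch below shows how modest losses at successive cascade steps compound to a cumulative loss of roughly 70% before treatment initiation. The step names are generic and the retention fractions are hypothetical placeholders, not estimates from Bastos and colleagues [9].

```python
# Toy sketch: how modest per-step losses in a TPT cascade of care compound.
# Step names are generic; retention fractions are hypothetical placeholders,
# not estimates from Bastos and colleagues.
steps = [
    ("identified -> screened for TB infection", 0.72),
    ("screened -> results returned",            0.80),
    ("eligible -> TPT recommended",             0.85),
    ("recommended -> TPT initiated",            0.65),
]

remaining = 1.0
for step, retained in steps:
    remaining *= retained
    print(f"{step:<42} retained {retained:.0%}, cumulative {remaining:.0%}")

print(f"lost before treatment initiation: {1.0 - remaining:.0%}")  # ~68%
```

No single step looks catastrophic in isolation, which is why the full cascade must be assessed: the product of several tolerable-looking losses is an intolerable overall loss.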

10.
This Formal Comment provides clarifications on the authors’ recent estimates of global bacterial diversity and the current status of the field, and responds to a Formal Comment from John Wiens regarding their prior work.

We welcome Wiens’ efforts to estimate global animal-associated bacterial richness and thank him for highlighting points of confusion and potential caveats in our previous work on the topic [1]. We find Wiens’ ideas worthy of consideration, as most of them represent a step in the right direction, and we encourage lively scientific discourse for the advancement of knowledge. Time will ultimately reveal which estimates, and underlying assumptions, came closest to the true bacterial richness; we are excited and confident that this will happen in the near future thanks to rapidly increasing sequencing capabilities. Here, we provide some clarifications on our work, its relation to Wiens’ estimates, and the current status of the field.

First, Wiens states that we excluded animal-associated bacterial species in our global estimates. However, thousands of animal-associated samples were included in our analysis, and this was clearly stated in our main text (second paragraph on page 3).

Second, Wiens’ commentary focuses on “S1 Text” of our paper [1], which was rather peripheral and, hence, in the Supporting information. S1 Text [1] critically evaluated the rationale underlying previous estimates of global bacterial operational taxonomic unit (OTU) richness by Larsen and colleagues [2], but the results of S1 Text [1] did not in any way flow into the analyses presented in our main article. Indeed, our estimates of global bacterial (and archaeal) richness, discussed in our main article, are based on 7 alternative well-established estimation methods founded on concrete statistical models, each developed specifically for richness estimation from multiple survey data. We applied these methods to >34,000 samples from >490 studies, including, but not restricted to, animal microbiomes, to arrive at our global estimates, independently of the discussion in S1 Text [1].

Third, Wiens’ commentary can yield the impression that we proposed that there are only 40,100 animal-associated bacterial OTUs and that Cephalotes in particular have only 40 associated bacterial OTUs. However, these numbers, mentioned in our S1 Text [1], were not meant to be taken as proposed point estimates for animal-associated OTU richness, and we believe that this was clear from our text. Instead, these numbers were meant as examples to demonstrate how strongly the estimates of animal-associated bacterial richness by Larsen and colleagues [2] would decrease simply by (a) using better-justified mathematical formulas, i.e., with the same input data as used by Larsen and colleagues [2] but founded on an actual statistical model; (b) accounting for even minor overlaps in the OTUs associated with different animal genera; and/or (c) using alternative animal diversity estimates published by others [3], rather than those proposed by Larsen and colleagues [2]. Specifically, regarding (b), Larsen and colleagues [2] (pages 233 and 259) performed pairwise host species comparisons within various insect genera (for example, within the Cephalotes) to estimate on average how many bacterial OTUs were unique to each host species, then multiplied that estimate by their estimated number of animal species to determine the global animal-associated bacterial richness. However, since their pairwise host species comparisons were restricted to congeneric species, their estimated number of unique OTUs per host species does not account for potential overlaps between different host genera.
Indeed, even if an OTU is only found “in one” Cephalotes species, it might not be truly unique to that host species if it is also present in members of other host genera. To clarify, we did not claim that all animal genera can share bacterial OTUs, but instead considered the implications of some average microbiome overlap (some animal genera might share no bacteria, and other genera might share a lot). The average microbiome overlap of 0.1% (when clustering bacterial 16S sequences into OTUs at 97% similarity) between animal genera used in our illustrative example in S1 Text [1] is of course speculative, but it is not unreasonable (see our next point). A zero overlap (implicitly assumed by Larsen and colleagues [2]) is almost certainly wrong. One goal of our S1 Text [1] was to point out the dramatic effects of such overlaps on animal-associated bacterial richness estimates using “basic” mathematical arguments.

Fourth, Wiens’ commentary could yield the impression that existing data are able to tell us with sufficient certainty when a bacterial OTU is “unique” to a specific animal taxon. However, so far, the microbiomes of only a minuscule fraction of animal species have been surveyed. One can thus certainly not exclude the possibility that many bacterial OTUs currently thought to be “unique” to a certain animal taxon will eventually also be found in other (potentially distantly related) animal taxa, for example, due to similar host diets and/or environmental conditions [4–7]. As a case in point, many bacteria in herbivorous fish guts were found to be closely related to bacteria in mammals [8], and Song and colleagues [6] report that bat microbiomes closely resemble those of birds. The gut microbiome of caterpillars consists mostly of dietary and environmental bacteria and is not species specific [4]. Even in animal taxa with characteristic microbiota, there is a documented overlap across host species and genera. For example, there is a small number of bacteria consistently and specifically associated with bees, but these are found across bee genera at the level of 99.5%-similar 16S rRNA OTUs [5]. To further illustrate that an average microbiome overlap between animal taxa at least as large as the one considered in our S1 Text (0.1%) [1] is not unreasonable, we analyzed 16S rRNA sequences from the Earth Microbiome Project [6,9] and measured the overlap of microbiota originating from individuals of different animal taxa. We found that, on average, 2 individuals from different host classes (e.g., 1 mammalian and 1 avian sample) share 1.26% of their OTUs (16S clustered at 100% similarity), and 2 individuals from different host genera belonging to the same class (e.g., 2 mammalian samples) share 2.84% of their OTUs (methods in S1 Text of this response). A coarser OTU threshold (e.g., 97% similarity, considered in our original paper [1]) would further increase these average overlaps. While less is known about insect microbiomes, there is currently little reason to expect a drastically different picture there, and, as explained in our S1 Text [1], even a small average microbiome overlap of 0.1% between host genera would strongly limit total bacterial richness estimates.
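To illustrate the kind of pairwise overlap computation described above, the following toy sketch uses hypothetical OTU sets. The normalization shown (shared fraction relative to the smaller set) is one reasonable choice, not necessarily the one used in the Earth Microbiome Project analysis.

```python
# Toy sketch of a pairwise OTU-overlap calculation. The OTU sets are
# hypothetical, and normalizing by the smaller set is just one reasonable
# choice; the Earth Microbiome Project analysis may normalize differently.
from itertools import combinations

samples = {
    "mammal_1": {"otu1", "otu2", "otu3", "otu4"},
    "bird_1":   {"otu3", "otu5", "otu6"},
    "insect_1": {"otu2", "otu7"},
}

def overlap(a, b):
    """Fraction of the smaller OTU set that is shared between two samples."""
    return len(a & b) / min(len(a), len(b))

pairs = list(combinations(samples, 2))
mean_overlap = sum(overlap(samples[x], samples[y]) for x, y in pairs) / len(pairs)
print(f"mean pairwise OTU overlap: {mean_overlap:.1%}")  # 27.8% for this toy data
```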
The fact that the accumulation curve of detected bacterial OTUs over sampled insect species does not yet strongly level off says little about where the accumulation curve would asymptotically converge; rigorous statistical methods, such as the ones used for our global estimates [1], would be needed to estimate this asymptote.

Lastly, we stress that while the present conversation (including previous estimates by Louca and colleagues [1], Larsen and colleagues [2], Locey and colleagues [10], Wiens’ commentary, and this response) focuses on 16S rRNA OTUs, it may well be that at finer phylogenetic resolutions, e.g., at the bacterial strain level, host specificity and bacterial richness are substantially higher. In particular, future whole-genome sequencing surveys may well reveal the existence of far more genomic clusters and ecotypes than 16S-based OTUs.

11.
Humans help each other. This fundamental feature of Homo sapiens has been one of the most powerful forces sculpting the advent of modern civilizations. But what determines whether humans choose to help one another? Across 3 replicating studies, we demonstrate here that sleep loss represents one previously unrecognized factor dictating whether humans choose to help each other, observed at 3 different scales (within individuals, across individuals, and across societies). First, at the individual level, 1 night of sleep loss triggers the withdrawal of help from one individual to another. Moreover, fMRI findings revealed that this withdrawal of helping is associated with deactivation of key nodes within the social cognition brain network that facilitates prosociality. Second, at the group level, ecological night-to-night reductions in sleep across several nights predict corresponding next-day reductions in the choice to help others during day-to-day interactions. Third, at the large-scale national level, we demonstrate that 1 h of lost sleep opportunity, inflicted by the transition to Daylight Saving Time, reduces real-world altruistic helping through the act of donation giving, as established through the analysis of over 3 million charitable donations. Inadequate sleep therefore represents a significant influential force determining whether humans choose to help one another, observable across micro- and macroscopic levels of civilized interaction. The implications of this effect may be non-trivial given the essentiality of human helping in the maintenance of cooperative, civil society, combined with the reported decline in sufficient sleep in many first-world nations.

Helping behavior between humans has been one of the most influential forces sculpting modern civilizations, but what factors influence this propensity to help? This study demonstrates that a lack of sleep dictates whether humans choose to help each other at three different scales: within individuals, across individuals, and across societies.

"Service to others is the rent you pay for your room here on earth." ― Muhammad Ali
Humans help each other. Helping is a prominent feature of Homo sapiens [1] and represents a fundamental force sculpting the advent and preservation of modern civilizations [2,3].

The ubiquity of helping is evident across the full spectrum of societal strata. From global government-to-government aid packages (e.g., the international aid following the 2004 Indian Ocean tsunami [4]), to country-wide pledge drives (e.g., the 2010 Haiti disaster) [5], to individuals altruistically gifting money or donating their own blood to strangers, the expression of helping is abundant and pervasive [6]. So much so that this fundamental act has scaled into a lucent and sizable "helping economy" [7], with charitable giving in the United States amounting to $450 billion in 2019; a value representing 5.5% of the gross domestic product. In the United Kingdom, 10 billion pounds were donated to charity in 2017 and 2018. Indeed, more than 50% of individuals across the US, Europe, and Asia report having donated to charity or helped a stranger within the past month (the World Giving Index). Human helping is therefore globally abundant, common across diverse societies, sizable in scope, substantive in financial magnitude, consequential in ramification, and frequent in occurrence.

The motivated drive for humans to help each other has been linked to a range of underlying factors, from evolutionary forces (e.g., kin selection and reciprocal altruism that bias helping toward close others [2]) and cultural norms and expectations (e.g., individualistic versus collectivistic cultures [8,9]) to socioeconomic factors (e.g., helping is less common in larger cities relative to rural areas [10,11]) and personality traits (e.g., individual empathy) [12,13].

Ultimately, however, the decisional act to help others involves the human brain. Prosocial helping of varied kinds consistently engages a set of brain regions known as the social cognition network. Comprising the medial prefrontal cortex (mPFC), mid and superior temporal sulcus, temporal-parietal junction (TPJ), and the precuneus [14,15], this network is activated when considering the mental states, needs, and perspectives of others [16–19], and during the active choice to help them [20–23]. In contrast, lesions within key regions of this network result in "acquired sociopathy" [24], associated with a loss of both empathy and the withdrawal of compassionate helping [25–27].

Yet the possibility that sleep loss represents another significant factor determining whether or not humans help each other, linked to underlying impairments within the social cognition brain network, remains untested. Several lines of evidence motivate this prediction. First, insufficient sleep impairs emotional processing, including deficits in emotion recognition and expression, while conversely increasing basic emotional reactivity, further linked to antisocial behavior [28,29] (such as increased interpersonal conflict [30] and reduced trust in others [31,32]). Second, sleep loss reliably decreases activity in, and disrupts functional connectivity between, numerous regions within the social cognition brain network [33], including the mPFC [34], TPJ, and precuneus [35].

Building on this overarching hypothesis, here, we test the prediction that a lack of sleep impairs human helping at a neural, individual, group, and global societal level.
More specifically, we tested whether: (i) within individuals, a night of experimental sleep loss decreases the fundamental desire to help others, the underlying neural mechanism of which is linked to impaired activity within the social cognition brain network when considering other individuals (Study 1); (ii) in a micro-longitudinal design, night-to-night fluctuations in sleep result in corresponding next-day deficits in the desire to act altruistically and help others (Study 2); and (iii) at a national level, the loss of 1 h of sleep opportunity, imposed by the transition to daylight saving time (DST), impairs the real-world behavioral act of altruistic helping at a large-scale, societal level (Study 3).  相似文献
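For intuition about the Study 3 design, a highly simplified sketch of an event comparison around the spring DST transition is shown below. This is our illustration only: the file name, column names, and dates are hypothetical, and the authors' actual analysis of over 3 million donations involves additional controls.

```python
# Hypothetical input: donations.csv with columns "date" and "amount".
import pandas as pd

df = pd.read_csv("donations.csv", parse_dates=["date"])
daily = df.groupby("date")["amount"].agg(count="size", total="sum")

dst = pd.Timestamp("2019-03-10")              # US spring DST transition, 2019
week_after = daily.loc[dst : dst + pd.Timedelta(days=6)]
controls = pd.concat([                        # flanking non-DST control weeks
    daily.loc[dst - pd.Timedelta(days=14) : dst - pd.Timedelta(days=8)],
    daily.loc[dst + pd.Timedelta(days=14) : dst + pd.Timedelta(days=20)],
])

# Percentage change in the daily donation count relative to control weeks.
change = 100 * (week_after["count"].mean() / controls["count"].mean() - 1)
print(f"donation rate in the DST week vs control weeks: {change:+.1f}%")
```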

12.
Coral reefs on remote islands and atolls are less exposed to direct human stressors but are becoming increasingly vulnerable because of their development for geopolitical and military purposes. Here we document dredging and filling activities by countries in the South China Sea, where building new islands and channels on atolls is leading to considerable losses of, and perhaps irreversible damages to, unique coral reef ecosystems. Preventing similar damage across other reefs in the region necessitates the urgent development of cooperative management of disputed territories in the South China Sea. We suggest using the Antarctic Treaty as a positive precedent for such international cooperation.

Coral reefs constitute one of the most diverse, socioeconomically important, and threatened ecosystems in the world [1–3]. Coral reefs harbor thousands of species [4] and provide food and livelihoods for millions of people while safeguarding coastal populations from extreme weather disturbances [2,3]. Unfortunately, the world's coral reefs are rapidly degrading [1–3], with ~19% of the total coral reef area effectively lost [3] and 60% to 75% under direct human pressures [3,5,6]. Climate change aside, this decline has been attributed to threats emerging from widespread human expansion in coastal areas, which has facilitated exploitation of local resources, assisted colonization by invasive species, and led to the loss and degradation of habitats directly and indirectly through fishing and runoff from agriculture and sewage systems [1–3,5–7]. In efforts to protect the world's coral reefs, remote islands and atolls are often seen as reefs of "hope," as their isolation and uninhabitability provide de facto protection against direct human stressors and may help impacted reefs through replenishment [5,6]. Such isolated reefs may, however, still be vulnerable because of their geopolitical and military importance (e.g., allowing expansion of exclusive economic zones and providing strategic bases for military operations). Here we document patterns of reclamation (defined here as creating new land by filling submerged areas) of atolls in the South China Sea, which have resulted in considerable loss of coral reefs. We show that conditions are ripe for reclamation of more atolls, highlighting the need for international cooperation in the protection of these atolls before more unique and ecologically important biological assets are damaged, potentially irreversibly so.

Studies of past reclamations and reef dredging activities have shown that these operations are highly deleterious to coral reefs [8,9]. First, reef dredging affects large parts of the surrounding reef, not just the dredged areas themselves. For example, 440 ha of reef was completely destroyed by dredging on Johnston Island (United States) in the 1960s, but over 2,800 ha of nearby reefs were also affected [10]. Similarly, at Hay Point (Australia) in 2006, there was a loss of coral cover up to 6 km away from dredging operations [11]. Second, recovery from the direct and indirect effects of dredging is slow at best and nonexistent at worst. In 1939, 29% of the reefs in Kaneohe Bay (United States) were removed by dredging, and none of the patch reefs that were dredged had completely recovered 30 years later [12].
In Castle Harbour (Bermuda), reclamation to build an airfield in the early 1940s led to limited coral recolonization and large quantities of resuspended sediments even 32 years after reclamation [13]; several fish species are claimed extinct as a result of this dredging [14,15]. Such examples and others led Hatcher et al. [8] to conclude that dredging and land clearing, as well as the associated sedimentation, are possibly the most permanent of anthropogenic impacts on coral reefs.

The impacts of dredging for the Spratly Islands are of particular concern because the geographical position of these atolls favors connectivity via stepping stones for reefs over the region [16–19] and because their high biodiversity works as insurance for many species. In an extensive review of the sparse and limited data available for the region, Hughes et al. [20] showed that reefs on offshore atolls in the South China Sea were overall in better condition than near-shore reefs. For instance, by 2004 they reported average coral covers of 64% for the Spratly Islands and 68% for the Paracel Islands. By comparison, coral reefs across the Indo-Pacific region in 2004 had average coral covers below 25% [21]. Reefs on isolated atolls can still be prone to extensive bleaching and mortality due to global climate change [22] and, in the particular case of atolls in the South China Sea, the use of explosives and cyanide [20]. However, the potential for recovery of isolated reefs from such stressors is remarkable. Hughes et al. [20] documented, for instance, how coral cover in several offshore reefs in the region declined from above 80% in the early 1990s to below 6% by 1998 to 2001 (due to a mixture of El Niño and damaging fishing methods that make use of cyanide and explosives) but then recovered to 30% on most reefs, and up to 78% on some, by 2004–2008. Another important attribute of atolls in the South China Sea is their great diversity of species. Over 6,500 marine species are recorded for these atolls [23], including some 571 reef coral species [24] (more than half of the world's known species of reef-building corals). The relatively better health and high diversity of coral reefs in atolls over the South China Sea highlight the uniqueness of such reefs and the important roles they may play for reefs throughout the entire region. Furthermore, these atolls are safe harbor for some of the last viable populations of highly threatened species (e.g., the Bumphead Parrotfish [Bolbometopon muricatum] and several species of sawfishes [Pristis, Anoxypristis]), highlighting how dredging in the South China Sea may threaten not only species with extinction but also the commitment by countries in the region to biodiversity conservation goals such as the Convention on Biological Diversity Aichi Targets and the United Nations Sustainable Development Goals.

Recently available remote sensing data (i.e., Landsat 8 Operational Land Imager and Thermal Infrared Sensors Terrain Corrected images) allow quantification of the sharp contrast between the gain of land and the loss of coral reefs resulting from reclamation in the Spratly Islands (Fig 1).
For seven atolls recently reclaimed by China in the Spratly Islands (names provided in Fig 1D; see S1 Data for details), the area of reclamation is the size of visible areas in Landsat band 6, as prior to reclamation most of the atolls were submerged, with the exception of small areas occupied by a handful of buildings on piers (note that the amount of land area was near zero at the start of the reclamation; Fig 1C, S1 Data). The seven reclaimed atolls have effectively lost ~11.6 km2 (26.9%) of their reef area for a gain of ~10.7 km2 of land (i.e., a >75-fold increase in land area) from February 2014 to May 2015 (Fig 1C). The area of land gained was smaller than the area of reef lost because reefs were lost not only through land reclamation but also through the deepening of reef lagoons to allow boat access (Fig 1B). Similar quantification of reclamation by other countries in the South China Sea is provided in Table 1.

Fig 1. Reclamation leads to gains of land in return for losses of coral reefs: A case example of China's recent reclamation in the Spratly Islands.

Table 1. List of reclaimed atolls in the Spratly Islands and the Paracel Islands.

The impacts of reclamation on coral reefs are likely more severe than simple changes in area, as reclamation is being achieved by means of suction dredging (i.e., cutting and sucking materials from the seafloor and pumping them over land). With this method, reefs are ecologically degraded and denuded of their structural complexity. Dredging and pumping also disturb the seafloor and can cause runoff from reclaimed land, which generates large clouds of suspended sediment [11] that can lead to coral mortality by overwhelming the corals' capacity to remove sediments, leaving corals susceptible to lesions and diseases [7,9,25]. The highly abrasive coralline sands in flowing water can scour away living tissue on a myriad of species and bury many organisms beyond their recovery limits [26]. Such sedimentation also prevents new coral larvae from settling in and around the dredged areas, which is one of the main reasons why dredged areas show no signs of recovery even decades after the initial dredging operations [9,12,13]. Furthermore, degradation of wave-breaking reef crests, which make reclamation in these areas feasible, will result in a further reduction of coral reefs' ability to (1) self-repair and protect against wave abrasion [27,28] (especially in a region characterized by typhoons) and (2) keep up with rising sea levels over the next several decades [29]. This suggests that the new islands would require periodic dredging and filling, that these reefs may face chronic distress and long-term ecological damage, and that reclamation may prove economically expensive and impractical.

The potential for land reclamation on other atolls in the Spratly Islands is high, which necessitates the urgent development of cooperative management of disputed territories in the South China Sea. First, the Spratly Islands are rich in atolls with similar characteristics to those already reclaimed (Fig 1D); second, there are calls for rapid development of disputed territories to gain access to resources and increase sovereignty and military strength [30]; and third, all countries with claims in the Spratly Islands have performed reclamation in this archipelago [20]. Cooperative mechanisms are therefore urgently needed; one such possibility is the generation of a multinational marine protected area [16,17].
Such a marine protected area could safeguard an area of high biodiversity and importance to genetic connectivity in the Pacific, in addition to promoting peace in the region (extended justification provided by McManus [16,17]). A positive precedent for the creation of this protected area is that of Antarctica, which was also subject to numerous overlapping claims and where a recently renewed treaty froze national claims, preventing large-scale ecological damage while providing environmental protection and areas for scientific study. Development of such a legal framework for the management of the Spratly Islands could prevent conflict, promote functional ecosystems, and potentially result in larger gains (e.g., through spillover [31]) for all countries involved.  相似文献
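A back-of-envelope check of the area figures quoted above; this sketch uses only the rounded numbers given in the text, not the underlying Landsat data:

```python
# Quoted figures for the seven reclaimed atolls (Feb 2014 to May 2015).
reef_lost_km2 = 11.6       # reef area lost to dredging and filling
reef_lost_frac = 0.269     # stated to be 26.9% of the atolls' reef area
land_gained_km2 = 10.7     # new land created over the same period

total_reef_km2 = reef_lost_km2 / reef_lost_frac   # reef area before works
initial_land_km2 = land_gained_km2 / 75           # implied by ">75-fold gain"
print(f"reef area before reclamation ~ {total_reef_km2:.1f} km2")
print(f"implied pre-existing land area < {initial_land_km2:.2f} km2")
```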

13.
Several issues have been identified with the current programs for the elimination of onchocerciasis, which target only transmission by using mass drug administration (MDA) of the drug ivermectin. Alternative and/or complementary treatment regimens as part of a more comprehensive strategy to eliminate onchocerciasis are needed. We posit that the addition of "prophylactic" drugs, or of therapeutic drugs that can be utilized in a prophylactic strategy, to the toolbox of present microfilaricidal drugs and/or future macrofilaricidal treatment regimens will not only improve the chances of meeting the elimination goals but may hasten the time to elimination and also support achieving a sustained elimination of onchocerciasis. These "prophylactic" drugs will target the infective third- (L3) and fourth-stage (L4) larvae of Onchocerca volvulus and consequently prevent the establishment of new infections, not only in uninfected individuals but also in already infected individuals, thus reducing the overall adult worm burden and transmission. Importantly, an effective prophylactic treatment regimen can utilize drugs that are already part of the onchocerciasis elimination program (ivermectin), those being considered for MDA (moxidectin), and/or the potential macrofilaricidal drugs (oxfendazole and emodepside) currently under clinical development. Prophylaxis of onchocerciasis is not a new concept. We present new data showing that these drugs can inhibit L3 molting and/or inhibit the motility of L4 at IC50 and IC90 values that are covered by the plasma concentrations of these drugs achieved in human clinical trials, based on the corresponding pharmacological profiles obtained when these drugs were tested at various doses for the therapeutic treatment of various helminth infections.

Onchocerca volvulus is an obligate human parasite and the causative agent of onchocerciasis, a chronic neglected tropical disease prevalent mostly in sub-Saharan Africa. In 2017, 20.9 million people were infected, with 14.6 million having skin pathologies and 1.15 million having vision loss [1]. The socioeconomic impact of onchocerciasis and the debilitating morbidity caused by the disease prompted the World Health Organization (WHO) to initiate control programs that were first focused on reducing onchocerciasis as a public health problem; since 2012, the ultimate goal is to eliminate it by 2030 [2]. Over the years, WHO sponsored and coordinated 3 major programs: the Onchocerciasis Control Programme (OCP), the African Programme for Onchocerciasis Control (APOC), and the Onchocerciasis Elimination Program of the Americas (OEPA). Since 1989, the control measures have depended on mass drug administration (MDA), annually or biannually, with ivermectin, which targets the transmitting stage of the parasite, the microfilariae [3–5]. However, several issues have been identified with the current MDA programs, including the need to expand treatment to more populations depending on baseline endemicity and transmission rates [2,6]. Moreover, it became apparent that alternative and/or complementary treatment regimens as part of a more comprehensive strategy to eliminate onchocerciasis are needed [2]. Ivermectin has only mild to moderate effects on the adult stages of the parasite [7–9], and there are communities in Africa where the effects of ivermectin are suboptimal [10]. It is also contraindicated in areas of Loa loa co-endemicity [11], as well as in children under the age of 5 and in pregnant women. By relying only on MDA with ivermectin, the most optimistic mathematical modeling predicts that elimination will occur only in 2045 [12].

To support the elimination agenda, much of the recent focus has been on improving efficacy outcomes through improved microfilariae control with moxidectin and the discovery of macrofilaricidal drugs that target the adult O. volvulus parasites [13–18]. We posit that the addition of "prophylactic" drugs, or of therapeutic drugs that can be utilized in a prophylactic strategy, to the toolbox of present microfilaricidal drugs and/or future macrofilaricidal treatment regimens will not only improve the chances of meeting the elimination goals but may also hasten the time to elimination and support achieving a sustained elimination of onchocerciasis. These "prophylactic" drugs will target the infective third- (L3) and fourth-stage (L4) larvae of O. volvulus and consequently prevent the establishment of new infections, not only in uninfected individuals but also in already infected individuals, thus reducing the overall adult worm burden and transmission. Importantly, an effective prophylactic treatment regimen can utilize drugs that are already part of the onchocerciasis elimination program (ivermectin), those being considered for MDA (moxidectin) [19,20], and/or the potential macrofilaricidal drugs (oxfendazole and emodepside) currently under clinical development [21].

Prophylaxis of onchocerciasis is not a new concept. In the 1980s, once ivermectin was introduced as a "prophylactic" drug against the filarial dog heartworm, Dirofilaria immitis [22], its prophylactic effects were also examined in Onchocerca spp.
In chimpanzees, a single dose of ivermectin (200 μg/kg) was highly protective (an 83% reduction in patent infections) when given at the time of the experimental infection and tracked for development of patency over 30 months. It was, however, much less effective (a 33% reduction in patent infections) when given 1 month postinfection with the L3s, at which time the L4s had already developed [23]. Moreover, monthly treatment with ivermectin at either 200 μg/kg or 500 μg/kg for 21 months completely protected naïve calves against the development of O. ochengi infection, as compared to untreated controls, which were 83% positive for nodules and 100% positive for patency [24]. When naïve calves exposed to natural infection were treated with either ivermectin (150 μg/kg) or moxidectin (200 μg/kg) monthly or quarterly, none of the animals developed detectable infections after 22 months of exposure, except 2 animals in the quarterly ivermectin-treated group, which had 1 nodule each; in the non-treated control group, the nodule prevalence was 78.6% [25]. These prophylactic studies in calves exposed to natural infections clearly demonstrated that monthly or quarterly treatments with ivermectin and/or moxidectin over 22 months were highly efficacious against the development of new infections. When ivermectin was administered in a highly endemic region of onchocerciasis in Cameroon every 3 months over a 4-year period, it resulted in reduced numbers of new nodules (by 17.7%) when compared to individuals who were treated annually. This recent study suggests that ivermectin may also have a better prophylactic effect in humans when administered quarterly [26].

Importantly, moxidectin, a member of the macrocyclic lactone family of anthelmintic drugs, also used in veterinary medicine like ivermectin [20], was recently approved for the treatment of onchocerciasis as a microfilaricidal drug in individuals over the age of 12 [20]. In humans, a single dose of moxidectin (8 mg) appeared to be more efficacious than a single dose of ivermectin (150 μg/kg) in terms of lowering microfilarial loads [17]. Modeling has shown that an annual treatment with moxidectin and a biannual treatment with ivermectin would achieve similar reductions in the duration of the MDA programs when compared to an annual treatment with ivermectin [27].

In our efforts to identify macrofilaricidal drugs, we tested a selection of drugs for their ability to inhibit the molting of O. volvulus L3 to L4 as part of the in vitro drug screening funnel [13,28–31]. With some being highly effective, we decided to also examine the effects of the known MDA drugs, and of those already in clinical development for macrofilaricidal effects, on the molting of L3 and the motility of L4 (S1 Text) as potential "prophylactic" drugs. When ivermectin and moxidectin were evaluated, we found that both drugs were highly effective inhibitors of molting: an IC50 of 1.048 μM [918.86 ng/ml] and IC90 of 3.73 μM [2,949.1 ng/ml] for ivermectin, and an IC50 of 0.654 μM [418.43 ng/ml] and IC90 of 1.535 μM [985.3 ng/ml] for moxidectin (Table 1 and S1 Fig), with moxidectin being more effective than ivermectin. When both drugs were tested against the L4, we found that both inhibited the motility of L4s after 6 days of treatment: ivermectin had an IC50 of 1.38 μM [1,207.6 ng/ml] and IC90 of 31.45 μM [27,521.9 ng/ml] (Table 1 and S1 Fig), while moxidectin had an IC50 of 1.039 μM [665.4 ng/ml] and IC90 of approximately 30 μM [approximately 19,194 ng/ml] (Table 1 and S1 Fig).
Interestingly, when the treatment of L4 with both drugs was prolonged, the IC50 values for the inhibition of L4 motility on day 11 with ivermectin and moxidectin were 0.444 μM and 0.380 μM, respectively. Significantly, from the perspective of employing both drugs for prophylaxis against new infections with O. volvulus, moxidectin (8 mg) has an advantage: it achieves a maximum plasma concentration of 77.2 ± 17.8 ng/ml, is metabolized minimally, and has a half-life of 40.9 ± 18.25 days with an area under the curve (AUC) of 4,717 ± 1,494 ng*h/ml in healthy individuals [32], which covers the experimental IC50 achieved by moxidectin for inhibiting both L3 molting and L4 motility, and the IC90 for L3s. In comparison, ivermectin reaches a maximum plasma concentration of 54.4 ± 12.2 ng/ml with a half-life of 1.5 ± 0.43 days and an AUC of 3,180 ± 1,390 ng*h/ml in healthy humans [33], which only covers the IC50 for inhibiting molting of L3 and motility of L4. We therefore reason that, based on the significantly improved pharmacokinetic profile of moxidectin and its efficacy against both L3 and L4 larvae in vitro (Table 1), it might have a better "prophylactic" profile than ivermectin in terms of its potential to interrupt the development of new O. volvulus infections, and thus to ultimately affect transmission and further support the elimination of onchocerciasis. Adding to moxidectin's significance, in dogs it is a highly effective prophylactic drug against ivermectin-resistant D. immitis strains [19], an important attribute in the event that suboptimal responsiveness to ivermectin treatment becomes more widespread in the onchocerciasis-endemic regions of Africa. Testing the potential effect of moxidectin on the viability or development of transmitted L3 larvae was already recommended by Awadzi and colleagues in 2014 [34], when the excellent half-life of moxidectin in patients with onchocerciasis was realized. We have to acknowledge, however, that the key determinant of a drug's potency in vivo is actually a combination of the exposure (drug concentration) at the site of action and the duration of that exposure above the determined IC50/IC90. As we have access only to the AUC, half-life, and Cmax data for each of the in vitro–tested drugs, the use of plasma concentrations for predicting the anticipated potency of these putative "prophylactic" drugs in vivo has to be further assessed with care during clinical trials.

Table 1. Inhibition of O. volvulus L3 molting and L4 motility in vitro by the prospective prophylactic drugs, and their essential pharmacokinetic parameters at doses currently used or deemed safe for use in humans.
| Parameter | Ivermectin | Moxidectin | Albendazole | Albendazole sulfoxide (d) | Oxfendazole | Emodepside |
| --- | --- | --- | --- | --- | --- | --- |
| In vitro drug testing with O. volvulus larvae | | | | | | |
| Inhibition of L3 molting (a): IC50, μM (ng/ml) | 1.048 (918.86) | 0.654 (418.43) | 0.007 (1.9) | 0.008 (2.25) | 0.034 (10.7) | 0.0007 (0.8) |
| Inhibition of L3 molting (a): IC90, μM (ng/ml) | 3.730 (2,949.1) | 1.535 (985.3) | 0.023 (5.8) | 0.07 (19.69) | 0.071 (22.4) | 0.002 (2.2) |
| Inhibition of L4 motility (b): IC50, μM (ng/ml) | 1.38 (1,207) | 1.039 (665) | >2 | | >2 | 0.0005 (0.6) |
| Inhibition of L4 motility (b): IC90, μM (ng/ml) | 31.45 (27,521) | ~30 (~19,194) | | | | 0.078 (87.3) |
| Pharmacokinetic profiles from human clinical trials (c) | | | | | | |
| Dose | 150 μg/kg | 8 mg | 400 mg | 400 mg (as albendazole) | 15 mg/kg; 30 mg/kg | 1 mg; 40 mg |
| Cmax (plasma), ng/ml | 54.4 ± 12.2 | 77.2 ± 17.8 | 24.5 | 288 | 6,250 ± 1,390; 5,300 ± 1,690 | 18.6; 434 |
| Half-life t1/2 (h) | 36.6 ± 10.2 | 981 ± 438 | 1.53 | 8.56 | 9.97 ± 2.22; 9.82 ± 3.46 | 42.7; 392 |
| AUC (ng*h/ml) | 3,180 ± 1,390 | 4,717 ± 1,494 | 73 | 3,418 | 99,500 ± 2,440; 78,300 ± 2,830 | 100; 3,320 |
| Citations | [33] | [32] (e) | [41] | [41] | [42] | [43] |

For oxfendazole and emodepside, pharmacokinetic values are listed in the same order as the doses (15 mg/kg; 30 mg/kg and 1 mg; 40 mg, respectively). Empty cells indicate values not reported.
Footnotes to Table 1:

a. O. volvulus L3 obtained from infected Simulium sp. were washed, distributed at approximately 10 larvae per well, and cocultured in contact with naïve human peripheral blood mononuclear cells for a period of 6 days with or without the respective drugs in vitro (S1 Text) and as previously described [13,30]. Ivermectin (PHR1380, Sigma-Aldrich, St. Louis, Missouri, United States of America) and moxidectin (PHR1827, Sigma-Aldrich) were tested in the range of 0.01–10 μM; albendazole (A4673, Sigma-Aldrich), albendazole sulfoxide (35395, Sigma-Aldrich), and oxfendazole (31476, Sigma-Aldrich) in the range of 1–3 μM; and emodepside (Bayer) in the range of 0.3–1 μM, using 3-fold dilutions. On day 6, molting of L3 worms was recorded. Each condition was tested in duplicate and repeated at least once. The IC50 and IC90 were derived from nonlinear regression (curve fit) analysis in GraphPad Prism 6 with 95% confidence intervals.

b. L3s were allowed to molt to L4 in the presence of PBMCs, and on day 6, when molting was complete, the L4 larvae were collected, distributed at 6–8 worms per well, and treated with the respective concentrations of drugs [ivermectin and moxidectin: 0.01–30 μM at 3-fold dilutions; emodepside: 0.03–3 μM at 10-fold dilutions, plus 10 μM] for a period of 6 days. Inhibition of O. volvulus L4 motility was recorded as described [13,30]; representative videos of motility and inhibited motility can be viewed in Voronin and colleagues [30], S1–S3 Videos. Each condition was tested in duplicate and repeated at least once. The IC50 and IC90 were derived from nonlinear regression (curve fit) analysis in GraphPad Prism 6 with 95% confidence intervals.

c. Information regarding the pharmacokinetic profile of each drug was extracted from public data collected during the corresponding clinical trial(s) in humans, as referenced.

d. Pharmacokinetic parameters of albendazole sulfoxide, the predominant metabolite of albendazole.

e. Additional pharmacokinetic parameters for moxidectin, not only in healthy individuals but also in those living in Africa, can be found in the moxidectin FDA prescribing information: https://www.drugs.com/pro/moxidectin.html. In patients with onchocerciasis, a single dose of moxidectin (8 mg) is reported to achieve a maximum plasma concentration of 63.1 ± 20.0 ng/ml, with a half-life time of 559 ± 525 days and an AUC of 2,738 ± 1,606 ng*h/ml.

AUC, area under the curve; Cmax, maximum plasma concentration.

The prospects for identifying additional "prophylactic" drugs against O. volvulus increased when we tested 3 other drugs: albendazole, already in use for controlling helminth infections in humans; and oxfendazole and emodepside, being tested by the Drugs for Neglected Diseases initiative (DNDi) as potential repurposed macrofilaricidal drugs for human indications [21]. Albendazole is a primary drug of choice for MDA treatment of soil-transmitted helminths (STH; hookworms, whipworms [in combination with oxantel pamoate], and ascarids) [35], as well as for the elimination of lymphatic filariasis in Africa when used in combination with ivermectin [36]. Oxfendazole, a member of the benzimidazole family, is currently indicated for the treatment of a range of lung and gastrointestinal parasites in cattle and other veterinary parasites and is favorably considered for the treatment and control of helminth infections in humans [37].
Emodepside, an anthelmintic drug of the cyclooctadepsipeptide class, is used in combination with praziquantel to treat a range of gastrointestinal nematodes in dogs and cats [38–40].

We found that all 3 drugs were highly effective at inhibiting the molting of O. volvulus, even more so than ivermectin or moxidectin. The IC50 for inhibition of L3 molting with albendazole was 7 nM [1.9 ng/ml], and the IC90 was 23 nM [5.8 ng/ml]. The IC50 for inhibition of L3 molting with oxfendazole was 34 nM [10.7 ng/ml], and the IC90 was 71 nM [22.4 ng/ml] (Table 1 and S1 Fig). Albendazole and oxfendazole were less effective at inhibiting the motility of L4s, both having an IC50 >2 μM (Table 1). In previous studies, we reported that tubulin-binding drugs (flubendazole and oxfendazole) affected the motility of L4s and L5s only after repeated treatments over 14 days in culture [13,30]. Hence, both drugs might be more effective against L3s than against L4s, a stage that may require prolonged treatments and further evaluation in future studies. Albendazole is used for STH treatment as a single dose of 400 mg. At this dose, it reaches a maximum plasma concentration of 24.5 ng/ml with a half-life of 1.53 hours (AUC of 73 ng*h/ml) [41], which covers the IC90 for inhibition of L3 molting. In comparison, albendazole sulfoxide, an important active metabolite of albendazole, had a much improved maximum plasma concentration of 288 ng/ml with a half-life of 8.56 hours (AUC of 3,418 ng*h/ml) [41] (Table 1), which covers the IC50 of 8 nM [2.25 ng/ml] and IC90 of 70 nM [19.69 ng/ml] for inhibition of L3 molting in vitro. Oxfendazole, when administered at the doses currently being tested for efficacy against trichuriasis (whipworm infection), 30 mg/kg and 15 mg/kg, achieved maximum plasma concentrations of 5,300 ± 1,690 and 6,250 ± 1,390 ng/ml, respectively, with a half-life of approximately 9.9 hours (AUC: 78,300 ± 2,830 to 99,500 ± 2,440 ng*h/ml) (Table 1) [42], both of which cover the IC90 for inhibition of L3 molting. Hence, from the perspective of preventing newly established infections with O. volvulus L3 by inhibiting their molting, oxfendazole and albendazole are additional compelling candidates to consider.

Intriguingly, emodepside was the most effective drug on both L3s and L4s; it inhibited molting with an IC50 of 0.7 nM [0.8 ng/ml] (which is 10, 48.5, and approximately 1,000 times more potent than albendazole, oxfendazole, and moxidectin, respectively) and an IC90 of 2 nM [2.2 ng/ml]. Importantly, it also inhibited the motility of L4s by day 6, with an IC50 of 0.5 nM [0.6 ng/ml] and an IC90 of 78 nM [87.3 ng/ml] (Table 1 and S1 Fig), which is also more potent than the other drugs. In the ascending-dose (1 to 40 mg) human clinical trial (NCT02661178), emodepside achieved a maximum plasma concentration in the range of 18.6 to 595 ng/ml, an AUC of 100 to 4,112 ng*h/ml, and a half-life of 1.7 to 24.6 days, depending on the dose administered, and all doses were well tolerated (Table 1) [43]. Considering that the IC90 values for inhibition of L3 molting and L4 motility in vitro are 2 nM and 78 nM (Table 1 and S1 Fig), respectively, these values are already covered by the pharmacokinetic profile of the drug starting at 2.5 mg. Hence, the clinical trials of emodepside as a macrofilaricidal drug, if efficacious at 2.5 mg or above, could have additional implications in terms of utilizing emodepside for prophylaxis.

We propose that all 5 drugs are effective against the early stages of O. volvulus based on their efficacy (IC50/IC90) in vitro. However, based on their known pharmacokinetic profiles in humans, they can be prioritized for future evaluation of their utility for prophylaxis in humans as follows: emodepside > moxidectin > albendazole > oxfendazole > ivermectin. Moreover, we believe that the addition of some of these putative "prophylactic" drugs, individually or in combination with the current MDA regimens against onchocerciasis, would also align well with the integrated goals of the Expanded Special Project for Elimination of Neglected Tropical Diseases and possibly also expedite the elimination goals for one of the other 6 neglected tropical diseases amenable to MDA: the STH [44]. All 5 of these drugs are broad-spectrum anthelmintics that are effective against STH infections [45–49] and thus may also benefit MDA programs aimed at controlling STH infections. The effects of MDA with ivermectin or albendazole on STHs (hookworms, Ascaris lumbricoides, and Trichuris trichiura) have already been explored in clinical studies [45,47,50] and were shown to have a significant impact on STH infection rates in the treated communities. One dose of moxidectin (8 mg) in combination with albendazole (400 mg) was as effective as a combination of albendazole and oxantel pamoate (currently the most efficacious treatment against T. trichiura) in reducing fecal T. trichiura egg counts [46]. Notably, oxfendazole is also being tested for its effectiveness in humans against trichuriasis (NCT03435718). Additionally, emodepside was shown to have strong inhibitory activity not only against adult STH worms in animal models, with an ED50 of less than 1.5 mg/kg, but also against STH larval stages in vitro, with IC50 <4 μM for L3s [49].

We could envision that a single drug, a combination of any of these 5 drugs, or just those we have prioritized (moxidectin and emodepside), when administered also for prophylaxis against the development of new O. volvulus infections, would also protect against new STH infections. Broad-spectrum chemoprophylaxis of nematode infections in humans could potentially also save on the costs and time invested toward elimination of co-endemic parasites through the administration of a combination of drugs. Moreover, considering the time-consuming process of drug discovery, the heavy costs incurred, and the high failure rates, the prospect of repurposing commercially available drugs used for other human or veterinary diseases for the prophylaxis of O. volvulus infection is an attractive one [31,51–54]. Repurposing of drugs could also accelerate the approval timeline for new drug indications, since information regarding mechanism, dosing, toxicity, and metabolism would be readily available.

In summary, our O. volvulus in vitro drug testing studies reinforce the "old" proposition of employing MDA drugs for prophylactic strategies as well, inhibiting the development of new infections with O. volvulus in the endemic regions under MDA. We report for the first time that, in vitro, emodepside, moxidectin, and ivermectin have very promising inhibitory effects on both L3s and L4s, with albendazole and oxfendazole warranting additional consideration. Importantly, considering that the L4 larvae are longer lived than the L3 stage, and hence the more feasible target against the establishment of new infections, we believe that targeting the L4 stage would be an invaluable tool toward advancing sustainable elimination goals for onchocerciasis.
Moxidectin and emodepside, with their superior half-lives and pharmacokinetic profiles in humans and their in vitro efficacy against both the L3 and L4 stages of the parasite, seem to show the most promise for this purpose. Of significance, the doses required to provide exposures that would cover the IC90 achieved by these 2 drugs in vitro against L3, and by emodepside against L4, have been shown to be well tolerated in humans (Table 1). Crucially, as these new drugs are rolled out for human use as microfilaricidal and/or macrofilaricidal drugs, it would be important to extend the clinical protocols to also observe their effects on the development of new infections in populations exposed to active transmission, using serological assays that can predict new infections and distinguish them from earlier infections [55]. This could reveal valuable information to foster the development of more complementary elimination programs that target not only the microfilariae (moxidectin) and the adult worms (emodepside) but also the other infectious stages of the parasite, with their effects on STH being an added advantage.

Mathematical modeling has long influenced the design of intervention policies for onchocerciasis and predicted the potential outcomes of various regimens used by the elimination programs and the feasibility of elimination [56–60]. We believe that a revised mathematical model that also takes into account the additional aspect of targeting the L3 and L4 stages could be helpful to assess the enhanced impact this complementary tool might have in advancing the goal of elimination, and accordingly support a revised policy for operational intervention programs, first for onchocerciasis and perhaps also as a pan-nematode control measure, by the decision-making bodies [7,61,62]. Given that human clinical trials in which infected people were treated quarterly with ivermectin showed a considerable trend toward reduced numbers of newly formed nodules, it becomes apparent that such a revised regimen might also support protection from new infections. Clinical trials to assess the efficacy of biannual versus annual doses of ivermectin or moxidectin against onchocerciasis have already been initiated (NCT03876262). Alternatively, increasing the frequency of future treatments with moxidectin and/or emodepside to biannual or quarterly treatment, and/or using them in combination, could also improve their chemotherapeutic potential by targeting multiple stages of the parasite, increasing the overall control potential of these new MDA drugs and ultimately supporting not only a faster timeline to elimination but also its sustainability.  相似文献
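Footnotes a and b of Table 1 state that the IC50 and IC90 values were derived by nonlinear regression in GraphPad Prism 6. For readers without Prism, a minimal open-source equivalent of such a dose-response fit might look like the sketch below; the molting fractions are invented placeholders, not the study's data:

```python
# Sketch of a dose-response (Hill) fit; data points are made up for illustration.
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, ic50, h):
    """Fraction of control response at drug concentration `conc` (uM):
    logistic dose-response with top fixed at 1 and bottom at 0."""
    return 1.0 / (1.0 + (conc / ic50) ** h)

conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0])      # uM, 3-fold steps
molt = np.array([0.98, 0.95, 0.84, 0.60, 0.33, 0.12, 0.03])  # fraction molting

(ic50, h), _ = curve_fit(hill, conc, molt, p0=[0.5, 1.0])
ic90 = ic50 * 9.0 ** (1.0 / h)   # response = 0.1, i.e., 90% inhibition
print(f"IC50 ~ {ic50:.3f} uM, Hill slope ~ {h:.2f}, IC90 ~ {ic90:.3f} uM")
```

With this parameterization, the IC90 follows analytically from the fitted curve (IC90 = IC50 x 9^(1/h)), which is how the two potency thresholds in Table 1 relate to a single fitted dose-response curve.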

14.
In this Perspective, Shivani Misra and Jose C Florez discuss the application of precision medicine tools in under-represented populations.

People of South Asian ancestry carry a 3-fold higher risk of developing type 2 diabetes (T2D) than white European individuals [1], with the disease typically manifesting a decade earlier [2] and at a leaner body mass index (BMI) [3]. The South Asian population is often considered a uniform group, but significant heterogeneity exists in the prevalence of T2D and its phenotypic manifestations across South Asia, with a higher prevalence in those from Bangladeshi and Pakistani communities [4]. Genome-wide association studies (GWAS) have not fully explained the excess risk observed in South Asian individuals [5,6], and attention has turned to strategies through which genetic information may be leveraged for clinical benefit, such as aggregating weighted single nucleotide polymorphisms (SNPs) that capture the overall genetic burden for a trait into a polygenic score (PS) (sometimes described as a polygenic risk score) [7]. However, constructing a PS remains challenging in populations that are underrepresented in GWAS.

In the accompanying article in PLOS Medicine [8], Hodgson and colleagues investigate the use of a PS to predict T2D in the Genes & Health (G&H) cohort, addressing a key knowledge gap in the applicability of such tools in underrepresented ethnicities. G&H is a pioneering community-based cohort of approximately 48,000 participants of predominantly British Bangladeshi and Pakistani heritage, combining genetic and longitudinal electronic healthcare record data. The authors first assessed the transferability of known T2D genetic risk loci in G&H and constructed a PS using variants from a multi-ancestry GWAS, adjusting the scores for Pakistani and Bangladeshi individuals and selecting the best-performing score for prediction. This score was then integrated with 3 versions of a clinical model (QDiabetes) to predict T2D onset over 10 years in 13,642 individuals who were diabetes free at baseline. The authors show that incorporating the PS into QDiabetes provided better discrimination of progression to T2D, especially in those developing T2D under 40 years of age and in women with a history of gestational diabetes. Finally, they incorporated the PS into cluster analyses of baseline routine clinical characteristics, replicating clusters defined in European populations and identifying a cluster resembling a subgroup of severe insulin deficiency. This study significantly advances the field on the transferability of PSs, the reproducibility of T2D clusters, and the clinical translation of these findings to precision medicine for diabetes.  相似文献
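For intuition about the kind of analysis reported here, the sketch below adds a standardized polygenic score to a clinical logistic model and compares discrimination. It uses synthetic data and generic predictors; it is not Hodgson and colleagues' code, and QDiabetes itself is a published clinical equation rather than a model refit from scratch:

```python
# Synthetic demonstration: does adding a polygenic score (PS) improve AUC?
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 20_000
clinical = rng.normal(size=(n, 3))      # stand-ins for scaled clinical inputs
ps = rng.normal(size=n)                 # standardized polygenic score
logit = -2.5 + 0.6 * clinical[:, 0] + 0.5 * clinical[:, 1] + 0.4 * ps
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))   # simulated T2D onset

X_base = clinical
X_full = np.column_stack([clinical, ps])
Xb_tr, Xb_te, Xf_tr, Xf_te, y_tr, y_te = train_test_split(
    X_base, X_full, y, test_size=0.3, random_state=0)

auc_base = roc_auc_score(
    y_te, LogisticRegression().fit(Xb_tr, y_tr).predict_proba(Xb_te)[:, 1])
auc_full = roc_auc_score(
    y_te, LogisticRegression().fit(Xf_tr, y_tr).predict_proba(Xf_te)[:, 1])
print(f"AUC clinical only: {auc_base:.3f}; clinical + PS: {auc_full:.3f}")
```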

15.
In this Perspective, Fiona Bragg and Zhengming Chen discuss the burden of diabetes in the Chinese Population.

The worldwide epidemic of diabetes continues to grow [1]. In China, the rise in prevalence has been notably rapid; about 12% of the adult population has diabetes [2], accounting for almost one quarter of cases worldwide [1] and representing a 10-fold increase over the last 3 to 4 decades. It is appropriate, therefore, that diabetes, both its prevention and management, is a major focus of current health policy initiatives in China [3,4], and their success depends on reliable quantification of the burden of diabetes. Commonly used measures such as prevalence and incidence fail to capture excess mortality risks or differences in life expectancy in diabetes [5]. Moreover, they may be less easily interpreted by policy makers and affected individuals. Estimates of lifetime risks and life years spent living with diabetes in an accompanying study by Luk and colleagues provide a valuable new perspective on the burden of diabetes in the Chinese population [6].

The study used Hong Kong territory-wide electronic health records data for 2.6 million adults. Using a Markov chain model and Monte Carlo simulations, Luk and colleagues estimated age- and sex-specific lifetime risks of diabetes (incorporating both clinically diagnosed and undiagnosed diabetes) and remaining life years spent with diabetes. Their findings showed a lifetime risk of 65.9% and 12.7 years of life lived with diabetes for an average 20-year-old with normoglycaemia. For an average 20-year-old with prediabetes, the corresponding estimates were 88.0% and 32.5 years. In other words, more than 6 out of 10 20-year-olds with normoglycaemia and almost 9 out of 10 with prediabetes would be expected to develop diabetes in their lifetime. The estimated lifetime risks declined with increasing age and were higher among women than men at all ages, likely reflecting women's higher life expectancy.

These estimated lifetime risks are striking and concerning. Moreover, they are notably higher than western population estimates [7–10], including those considering both diagnosed and undiagnosed diabetes [9,10]. An Australian study estimated that 38% of 25-year-olds would develop diabetes in their lifetime [10]. Another study in the Netherlands reported 31.3% and 74.0% probabilities of developing diabetes in the remaining lifetime for individuals aged 45 years without diabetes and with prediabetes, respectively [9]. Diabetes incidence and overall mortality influence population lifetime risks. Differences in the glycaemic indicators used to identify undiagnosed diabetes may have contributed to differences between studies in diabetes incidence. In the study by Luk and colleagues, a combination of fasting plasma glucose (FPG), HbA1c levels, and oral glucose tolerance testing (OGTT) was used, while the Australian [10] and Netherlands [9] studies used FPG/OGTT and mainly FPG, respectively. However, it is unlikely that these differences would fully account for the large disparities seen in lifetime risk. Similarly, differences between life expectancy in Hong Kong (84.8 years), Australia (83.4 years), and the Netherlands (82.2 years) are too small to explain the differences. Interestingly, the high lifetime risks observed in Hong Kong were more comparable to those in the Indian population, estimated at 55.5% and 64.6%, respectively, among 20-year-old men and women [11]. The typical type 2 diabetes (T2D) phenotype in these Asian populations may partly explain their higher estimated lifetime risks.
More specifically, T2D in both Chinese and Indian populations is characterised by onset among younger and less adipose individuals than typically observed in western populations, exacerbated by rapid urbanisation and associated unhealthy lifestyles [12].

However, aspects of Luk and colleagues' study design may have overestimated lifetime diabetes risks. Chief among these is the data source used and its associated selection bias. The Hong Kong Diabetes Surveillance Database includes only individuals who have ever had a plasma glucose or HbA1c measurement undertaken in a local health authority facility. Since measurement of glycaemic indicators is more likely among individuals at greater current or future risk of dysglycaemic states, this will have inflated estimates of lifetime risk and life years spent with diabetes. Although the study authors undertook replication in the smaller China Health and Retirement Longitudinal Survey (CHARLS) cohort to address this bias, it does not fully allay these concerns, with modestly lower estimated lifetime diabetes risks in the CHARLS cohort, even after accounting for its higher mortality. A further limitation is the treatment of transitions to dysglycaemic states as irreversible. Although data on long-term transition between glycaemic states are lacking, reversion from prediabetes (and, less commonly, diabetes) to normoglycaemia is well recognised, e.g., through lifestyle interventions [13].

Large-scale population-based cohort studies could valuably address many of the limitations described [14]. Furthermore, lifetime risks are, by definition, population-based and represent the risk of an average person in the population, limiting their value for communicating long-term disease risks to specific individuals. However, the extensive phenotyping (e.g., adiposity) characteristic of many large contemporary cohorts [14] would facilitate incorporation of risk factors into lifetime risk estimates, enhancing their relevance to individuals. Previous studies have found greater lifetime risks of diabetes associated with adiposity [9,11], and this approach could be extended to incorporate other established, as well as more novel (e.g., genetic), risk factors. This is arguably of particular relevance to later-onset chronic conditions, such as T2D, in which changes in risk factors during middle age can influence lifetime risks. A valuable extension of Luk and colleagues' study will be estimation of risk factor–specific lifetime diabetes risks for the Chinese population.

Importantly, the limitations described do not detract from the enormity and importance of the challenge diabetes poses for China, including Hong Kong, and the estimates presented by Luk and colleagues provide valuable impetus for action. The disease burden insights can inform treatment programmes and enhance understanding of current and future impacts of diabetes and associated complications on the healthcare system. Moreover, T2D is preventable, and arguably the greatest value of these estimated lifetime risks lies in highlighting the need for, and informing the planning and provision of, diabetes primary prevention programmes. This includes identification of high-risk individuals, such as those with prediabetes, who are most likely to benefit from prevention interventions.
However, the magnitude of the estimated lifetime diabetes risks, including among the large proportion of the population in a normoglycaemic state, additionally demonstrates the need for population-level prevention approaches, including environmental, structural, and fiscal strategies. Without such actions, the individual and societal consequences of diabetes for present and future generations in Hong Kong, as well as mainland China, will be immense.  相似文献   
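To illustrate the Markov-chain and Monte Carlo machinery behind such lifetime-risk estimates, here is a toy sketch. The annual transition probabilities are invented for illustration only; Luk and colleagues estimated theirs from territory-wide health records:

```python
# Toy Markov-chain Monte Carlo estimate of lifetime diabetes risk and
# years lived with diabetes (all transition probabilities hypothetical).
import random

P = {  # annual transition probabilities; each row sums to 1
    "normo":       {"normo": 0.955, "prediabetes": 0.035,
                    "diabetes": 0.005, "dead": 0.005},
    "prediabetes": {"prediabetes": 0.93, "diabetes": 0.06, "dead": 0.01},
    "diabetes":    {"diabetes": 0.975, "dead": 0.025},
}

def one_life(start="normo", age=20, max_age=100, rng=random):
    """Simulate one person from `age` to `max_age`; return years with diabetes."""
    state, years_dm = start, 0
    for _ in range(age, max_age):
        if state == "dead":
            break
        if state == "diabetes":
            years_dm += 1
        nxt = P[state]
        state = rng.choices(list(nxt), weights=list(nxt.values()))[0]
    return years_dm

sims = [one_life() for _ in range(50_000)]
lifetime_risk = sum(y > 0 for y in sims) / len(sims)
mean_years = sum(sims) / len(sims)
print(f"lifetime diabetes risk ~ {lifetime_risk:.1%}; "
      f"mean years lived with diabetes ~ {mean_years:.1f}")
```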

16.
17.
Julie Bines discusses an accompanying study by Sheila Isanaka and colleagues on nutrient supplementation and immune responses to rotavirus vaccination.

The introduction of rotavirus vaccines into national immunization programs globally has made a major impact on diarrhea-related hospitalizations and deaths. By 2020, 107 countries had introduced rotavirus vaccines, either nationally or regionally, including 52 countries in Africa and Asia eligible for funding through the Global Alliance for Vaccines and Immunization (Gavi) [1]. This represents a major step toward reducing under-5 child mortality, the impact of rotavirus disease on child health, and the economic burden on families and the healthcare system. A remaining challenge is the lower vaccine protective efficacy observed in children in low- and middle-income countries (LMICs), where mortality and hospitalizations due to severe rotavirus disease still occur [1]. The role of nutrition in influencing the immune response to a rotavirus vaccine is the focus of the accompanying paper by Isanaka and colleagues published in this issue of PLOS Medicine [2].

Why over 87% of children vaccinated with a rotavirus vaccine in low child mortality countries are protected against severe rotavirus disease, compared to approximately 44% (27% to 59%) of children in high child mortality countries, is not well understood [3]. As rotavirus vaccines are administered orally, initial focus has been on factors that could neutralize the live vaccine virus within the gut lumen. Most rotavirus vaccines are administered in a buffered formulation to reduce the risk of neutralization of the vaccine virus by gastric acid [4]. In early clinical trials, fasting prior to vaccination was applied in an effort to reduce the potential impact of breast milk antibodies; this is now not considered necessary [5]. A difference in the gut microbiome between infants from high-income countries and LMICs has been observed, although the administration of a probiotic did not result in improved rotavirus vaccine immunogenicity [6,7]. Rotaviruses use histo-blood group antigens present on the gut epithelial surface in the initial phase of virus attachment and cellular entry [8]. It has been proposed that population variability in histo-blood group antigen phenotype, specifically Lewis and secretor status, may explain the genotype diversity of rotavirus between regions and the responses observed to live oral rotavirus vaccines, which may be VP4 [P] genotype dependent [8].

Childhood malnutrition is associated with reduced immune responses to a range of infections and to vaccines, including rotavirus vaccines [9]. Macro- and/or micronutrient deficiencies have been linked to a range of abnormalities in T and B cell function, mucosal immunity, cytokine production, and responses [9]. However, there are limited data on the impact of maternal nutritional supplements on the immune responses of their infants following vaccination. Isanaka and colleagues' cluster-randomized study was nested within a double-blind, placebo-controlled vaccine efficacy trial. It evaluated the effect of 3 different maternal nutritional supplements on serum antirotavirus immunoglobulin A (IgA) seroconversion following administration of 3 doses of the oral rotavirus vaccine Rotasiil (G1, G2, G3, G4, G9) in infants in Niger [2]. The daily supplements were commenced prior to 30 weeks' gestation in a population of women at risk of macro- and micronutrient malnutrition, although maternal anthropometry and micronutrient status before and after supplementation are not reported.
The supplement options included the "standard of care" iron–folic acid (IFA) supplement, a multi-micronutrient (MMN) supplement at levels equal to or double the US recommended dietary allowance for pregnant women, or the same MMN supplement with an additional energy and protein component. As all groups received a supplement, this study was designed to provide a comparison between supplement groups rather than a comparison with no supplement. Across all supplement groups, serum antirotavirus IgA seroconversion following administration of 3 doses of Rotasiil was modest at 39.6%, only 10 percentage points greater than that observed in the placebo group (29.0%). The rate of seroconversion did not differ between supplement groups, although serum IgA geometric mean titres were not reported. In a similar study in The Gambia, an enhanced antibody response to the diphtheria-tetanus-pertussis (DPT) vaccine was observed in the infants of mothers who had received a prenatal MMN and protein–energy supplement, when compared to those who received the "standard of care" iron–folate supplement [10]. Of note, the supplement used in The Gambia study contained more energy and protein than the MMN plus energy and protein supplement used in this study in Niger (The Gambia study versus the Niger study; energy [kcal]: 746 versus 237; protein [grams]: 20.8 versus 5.2; lipids [grams]: 52.6 versus 20). Whether the differences in vaccine immune responses reported between these 2 studies reflect these differences in supplement composition, differences specific to the vaccine (DPT versus rotavirus vaccine), study sample size, or characteristics of the study populations requires further study.

Rotavirus vaccines save lives and prevent hospitalizations due to rotavirus disease in children. Efforts to improve the level of protection provided by rotavirus vaccines, particularly in LMICs, have the potential to maximize the impact of these vaccines on global child health. Improving the nutritional status of infants through the provision of macro- and micronutrient supplements to pregnant mothers in high-risk populations may optimize immune responses to rotavirus vaccines; however, the specific composition of the prenatal supplement requires further investigation.  相似文献
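As a quick illustration of how one might check whether the reported seroconversion proportions (39.6% versus 29.0%) differ beyond chance, here is a sketch using invented group sizes; the trial's actual denominators are reported in [2]:

```python
# Two-proportion comparison with hypothetical sample sizes.
from scipy.stats import chi2_contingency

n_vaccine, n_placebo = 1000, 500        # invented denominators
sero_v = round(0.396 * n_vaccine)       # 39.6% seroconversion, vaccine arms
sero_p = round(0.290 * n_placebo)       # 29.0% seroconversion, placebo group

table = [[sero_v, n_vaccine - sero_v],
         [sero_p, n_placebo - sero_p]]
chi2, p_value, _, _ = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p_value:.4f}")
```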

18.
Peter Figueroa and co-authors advocate for equity in the worldwide provision of COVID-19 vaccines.

Many may not be aware of the full extent of global inequity in the rollout of Coronavirus Disease 2019 (COVID-19) vaccines in response to the Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) pandemic. As of June 20, 2021, only 0.9% of those living in low-income countries and less than 10% of those in low- and middle-income countries (LMICs) had received at least 1 dose of a COVID-19 vaccine, compared with 43% of the population living in high-income countries (HICs) [1] (Fig 1). Only 2.4% of the population of Africa had been vaccinated, compared with 41% of North America and 38% of Europe [1,2] (S1 Fig). Primarily due to the inability to access COVID-19 vaccines, less than 10% of the population had been vaccinated in as many as 85 LMICs, compared with over 60% of the population in 26 HICs [1]. Only 10 countries account for more than 75% of all COVID-19 vaccines administered [3]. This striking and ongoing inequity has occurred despite the explicit ethical principles affirming equity of access to COVID-19 vaccines articulated in the WHO SAGE values framework [4,5], prepared in mid-2020, well before COVID-19 vaccines became available.

Fig 1. Proportion of people vaccinated with at least 1 dose of COVID-19 vaccine, by income group (April 14 to June 23, 2021). Note: data on China first appeared in the database on June 9, hence the jump for upper-middle-income countries. Source: https://ourworldindata.org/covid-vaccinations.

The COVID-19 pandemic highlights the grave inequity and inadequacy of global preparedness and response to serious emerging infections. The establishment of the Coalition for Epidemic Preparedness Innovations (CEPI) in 2017, the launch of the Access to COVID-19 Tools Accelerator (ACT-A) and the COVID-19 Vaccines Global Access (COVAX) Facility in April 2020, and the rapid development of COVID-19 vaccines were all positive and extraordinary developments [6]. As of June 2021, the COVAX Facility had delivered approximately 83 million vaccine doses to 75 countries, representing approximately 4% of the global supply, and one-fifth of this was for HICs [7]. The COVAX Facility has been challenged to meet its supply commitments to LMICs due to insufficient access to doses of COVID-19 vaccines with the prerequisite WHO emergency use listing (EUL) or, under exceptional circumstances, product approval by a stringent regulatory authority (SRA) [8,9]. Because of the anticipated insufficient COVID-19 vaccine supply through the COVAX Facility, the majority of nonvaccine-producing LMICs decided, early in the COVID-19 pandemic, to secure and use vaccines produced in China or Russia prior to receipt of WHO EUL or SRA approval. Most of the vaccines used in LMICs as of June 20, 2021 (nearly 1.5 billion of the 2.6 billion doses administered) had neither WHO EUL nor SRA approval at the time they were given [10]. This raises possible concerns with respect to the effectiveness, safety, and acceptability of individual vaccines used by many countries [8,9].

19.
In this Perspective, Dimitrios Sagris, Stephanie Harrison, and Gregory Lip discuss new evidence concerning the paradoxical relationship between circulating lipids and atrial fibrillation.

Although the prevalence of cardiovascular comorbidities and associated risk factors such as diabetes mellitus, chronic kidney disease, and obesity increases with age, lipid levels may go through several changes over the course of a lifetime, associated with sex, ethnicity, and metabolic profile [1]. The association between lipid levels and atherosclerotic disease is well established [2], but the association between lipid levels and incidence of atrial fibrillation (AF) has not been fully elucidated.

Despite the association between high lipoprotein levels and the increased risk of atherosclerosis and coronary artery disease (CAD), which, in turn, may lead to an increased risk of AF [3], several studies have suggested that high levels of low-density lipoprotein cholesterol (LDL-C), total cholesterol (TC), and high-density lipoprotein cholesterol (HDL-C) are associated with a lower risk of AF [4]. The clinical significance and pathophysiological mechanisms of this paradoxical inverse association between lipid levels and AF risk remain unclear [4].

In an accompanying study in PLOS Medicine, Mozhu Ding and colleagues conducted a large population-based study of >65,000 adults aged 45 to 60 years without any history of cardiovascular disease, using data from the Swedish National Patient Register and Cause of Death Register [5]. Using International Classification of Diseases (ICD) codes from discharge diagnoses of hospital visits and causes of death captured in these registries, participants were followed up for up to 35 years for incident AF. Higher levels of TC and LDL-C were associated with a lower risk of AF within the first 5 years (hazard ratio [HR]: 0.61, 95% confidence interval [CI]: 0.41 to 0.99; HR: 0.64, 95% CI: 0.45 to 0.92, respectively), but the effect was attenuated after 5 years of follow-up. Conversely, lower levels of HDL-C, high triglyceride (TG) levels, and a high TG/HDL-C ratio were consistently associated with a higher risk of AF over 3 decades of follow-up (HRs ranging from 1.13 [95% CI: 1.07 to 1.19, p < 0.001] to 1.53 [95% CI: 1.12 to 2.00]).

Previous longitudinal studies have demonstrated that levels of the majority of lipoproteins increase significantly between the ages of 20 and 50 years before plateauing in older age; this pattern is observed mainly in men, while in women, increasing lipoprotein levels are associated with menopause [6]. These early findings were recently confirmed in a cross-sectional study in a Chinese population, in which TC and LDL-C were found to plateau between the ages of 40 and 60 years in men and at around 60 years of age in women, before declining markedly [7]. Considering that TC and LDL-C decline at older ages while the prevalence of AF rises with age, this may partially explain the inverse association of TC and LDL-C with AF.

Another interesting finding reported by Mozhu Ding and colleagues is the inverse association of HDL-C levels with AF incidence and the association of high TG levels and a high TG/HDL-C ratio with increased risk of AF [8]. This association remained consistent for more than 10 years, suggesting a potentially strong association of HDL-C and TG with incident AF. Since low HDL-C and high TG levels are important components of metabolic syndrome, this finding may point to a role of metabolic syndrome and its components in the risk of AF.
A recent meta-analysis of 6 cohort studies, including 30,810,460 patients, showed that metabolic syndrome and low HDL-C were each associated with a significantly higher risk of AF (HR: 1.57; 95% CI: 1.40 to 1.77, and HR: 1.18; 95% CI: 1.06 to 1.32, respectively) [9]. Based on this evidence, we can speculate that a combination of low HDL-C and high non-HDL-C (i.e., TC excluding HDL-C) may be associated with AF risk. Nonetheless, a nationwide cross-sectional survey suggested that non-HDL-C may also be associated with a lower risk of AF [10].

The associations observed in the study by Mozhu Ding and colleagues may have been related to lipid-lowering therapies, but, as the authors highlight, lipid-lowering medicines were uncommonly used in the first few years of the study period, and it is unlikely that they could have influenced the associations observed at the beginning of the follow-up period.

The association of lipid levels with incident AF remained consistent in patients both with and without heart failure (HF) or CAD. However, in the sensitivity analysis restricted to patients for whom data on medication use were available, among those with HF or CAD, the risk of AF was lower in those treated with lipid-lowering medication than in those who were not [8]. It seems that although lipid levels are correlated with incident AF irrespective of the presence of HF or CAD, in this high-risk population the use of lipid-lowering medication reduces the risk of AF, as previously suggested [11].

Mozhu Ding and colleagues have been able to conduct a large-scale study with a long follow-up period, and the findings agree with previous observational evidence. As with all observational studies, residual confounding may be present. In this study, baseline lipid levels were assessed, but variability over time was not examined; a previous nationwide study in Korea suggested that high variability in lipid levels is associated with a higher risk of AF development [12]. Additionally, smoking and physical activity, both important cardiovascular risk factors, were not accounted for, which may have partially influenced the observed associations.

Although the natural progression of vascular aging, chronic inflammation, and dynamic changes in cardiovascular risk factors, including dyslipidemia, may play an essential role in cardiovascular diseases and the risk of AF [13,14], the exact mechanisms of the potential inverse correlation between hyperlipidemia and AF remain elusive. The accompanying study by Mozhu Ding and colleagues supports the existing evidence on the paradoxical inverse correlation of TC, LDL-C, and HDL-C levels with the risk of future AF, providing further insight into the role of TG levels and their relationship with HDL-C levels. New insights may improve understanding of the pathophysiology behind this paradoxical observation [15]. Until then, hyperlipidemia should be assessed as part of overall cardiovascular risk [16], and the AF paradox should not outweigh this risk.
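A side note on how the hazard ratios quoted above are reported: HRs are estimated on the log scale, so their 95% CIs are symmetric only after a log transform. The Python sketch below is an illustration rather than part of any cited analysis; it backs an approximate standard error out of one published interval (TC and incident AF within 5 years: HR 0.61, 95% CI 0.41 to 0.99) and rebuilds the interval. The rebuilt bounds differ slightly because the published figures are rounded.

import math

def se_from_ci(lo, hi, z=1.96):
    # The 95% CI spans 2*z standard errors on the log scale.
    return (math.log(hi) - math.log(lo)) / (2 * z)

def ci_from_hr(hr, se, z=1.96):
    # Rebuild the 95% CI by exponentiating log(HR) +/- z*SE.
    return math.exp(math.log(hr) - z * se), math.exp(math.log(hr) + z * se)

# Published estimate: HR 0.61, 95% CI 0.41 to 0.99 (rounded).
se = se_from_ci(0.41, 0.99)
lo, hi = ci_from_hr(0.61, se)
print(f"SE(log HR) ~= {se:.3f}; rebuilt CI ~= ({lo:.2f}, {hi:.2f})")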
