1.
John Naslund and Eirini Karyotaki discuss Mark Jordans and colleagues’ accompanying research study on therapy for people with psychological distress in Nepal.

Humanitarian emergencies such as war, natural disasters, or pandemics profoundly disrupt the daily lives of those impacted and result in psychological distress and high risk of mental disorders. With the increasing frequency of humanitarian emergencies over the past decade, including the most recent Coronavirus Disease 2019 (COVID-19) pandemic, there is an immediate need for brief, scalable interventions that can be readily delivered to at-risk population groups [1]. With the dearth of available mental health specialists, especially in low-resource settings susceptible to crises, natural disasters, or displacement, combined with fragmented or poorly functioning health systems during emergencies, nonspecialists may be ideally positioned to deliver such programs [2]. Nonspecialists, such as community health workers or lay persons, do not have specialty training in mental healthcare; yet, these frontline providers often play an essential role in delivering primary care services in many low- and middle-income countries [3,4], and they are increasingly being recognized as critical for scaling up access to psychological treatments for mental disorders [5,6]. Further, in a humanitarian crisis, use of nonspecialists from the affected population offers key benefits, such as empowering community members and drawing upon the experience of facilitators [7].
In an accompanying study in PLOS Medicine, Mark Jordans and colleagues demonstrate that community members with no prior mental health training could effectively deliver the WHO Group Problem Management Plus (Group PM+) program in a humanitarian setting in Nepal [8]. The research team conducted a cluster randomized controlled trial enrolling 72 wards and found that the 5-session Group PM+ delivered by nonspecialists contributed to a reduction in psychological distress and depressive symptoms when compared to usual care. There may be opportunities to expand on these findings and further advance task sharing efforts in humanitarian settings.

2.
Ying Gue and Gregory Lip discuss the accompanying study by Ana-Catarina Pinho-Gomes and co-workers on blood pressure lowering treatment in patients with atrial fibrillation.

Atrial fibrillation (AF) is the most common sustained cardiac arrhythmia and is associated with an increased risk of major adverse cardiovascular events (MACE) [1]. Patients with AF typically have other concomitant cardiovascular risk factors—hypertension being one of the most commonly associated conditions, with a prevalence of up to 90% in major clinical trials of AF [2]. Not only is hypertension common in AF, but it is also an independent risk factor for ischaemic and haemorrhagic strokes, thus bearing important implications for patient prognosis [3]. Therefore, optimal management of hypertension in patients with AF is vital to prevent future MACE. In the accompanying individual-participant data (IPD) meta-analysis [4], the authors from the Blood Pressure Lowering Treatment Trialists’ Collaboration (BPLTTC) aimed to investigate the effects of blood pressure (BP) lowering treatment on MACE when comparing patients with and without AF at baseline. They aimed to address 4 main questions: firstly, whether AF at baseline modifies BP treatment effects; secondly, whether associations between intensity of BP reduction and outcomes are similar with or without AF; thirdly, whether treatment effect is dependent on baseline systolic BP; and lastly, whether classes of antihypertensives have different treatment effects in AF.
A total of 22 trials were included, with a total of 188,570 participants and 13,266 patients with a history of AF at baseline. Baseline characteristics differed between the 2 groups, with AF patients being older (mean age 70 years versus 65 years), having lower baseline BP (mean 143/84 mmHg versus 155/88 mmHg), and being more commonly prescribed diuretics (50.5% versus 23.8%), angiotensin converting enzyme inhibitors (59.6% versus 44%), beta-blockers (51.3% versus 36%), and alpha-blockers (10.7% versus 4.4%) at baseline. This reflects the more commonly associated cardiovascular comorbidities (hypertension, heart failure, and older age) in patients with AF at baseline [3,5,6].
Among the authors’ findings, firstly, the mean difference in systolic blood pressure (SBP) reduction was 7.2 mmHg in placebo-controlled studies (8 studies), 2.3 mmHg in drug–drug comparisons (12 studies), and 10.9 mmHg in more-versus-less intensive treatment trials (2 studies), with an overall difference of 3.7 mmHg. When comparing differences in SBP reduction, the authors reported no difference between patients with or without AF (3.3 mmHg versus 3.7 mmHg).
Secondly, meta-regression showed that each 5 mmHg reduction in BP equated to a 10% reduction in MACE in patients with and without AF. Thirdly, the authors found no evidence of a difference in treatment effects at different baseline systolic BP.
And lastly, there was no difference between classes of antihypertensives (renin-angiotensin-aldosterone system inhibitors (RAAS-I) and calcium channel blockers (CCB)), although this conclusion was limited by the small numbers of participants with AF in these studies.
The authors conclude that, due to the higher risk of MACE in AF patients compared to those without AF, the same relative risk reduction with BP control translates to a greater absolute risk reduction in AF patients and, therefore, more focus should be placed on addressing associated cardiovascular risk factors such as hypertension to improve outcomes in patients with AF.
We congratulate the authors for performing this highly relevant IPD meta-analysis to highlight the importance of the holistic management of patients with AF and the need for more evidence in this area. This thought process is echoed in the most recent European Society of Cardiology (ESC) guideline on the management of AF, with a shift from the CC (Confirm AF and Characterise AF) approach to the ABC (“A” Anticoagulation/Avoid stroke, “B” Better symptom control, and “C” Comorbidities/Cardiovascular risk factor management) approach to managing AF [7].
The association of BP control with a reduction in MACE in patients with AF does not come as a surprise, as hypertension has been linked not only with adverse cardiovascular outcomes but also with an increased risk of AF [8]. The importance of BP control has previously been shown in a large meta-analysis of 61 prospective observational studies involving 12.7 million person-years, which found a linear relation between BP and vascular (and overall) mortality starting from values of 115/75 mmHg [9]. BP control reduces mortality from ischaemic vascular events and haemorrhagic complications from anticoagulation treatment in patients with AF [3]. The reduction in mortality was reflected in the present study [4], although there was no differentiation between haemorrhagic and ischaemic stroke in the outcomes.
One limitation of this work is the inclusion of trials involving only patients with AF. AF status being the inclusion or exclusion criterion prior to randomisation could add to the risk of selection bias within the analysis. In addition, the majority of AF participants are from the Atrial Fibrillation Clopidogrel Trial With Irbesartan for Prevention of Vascular Events (ACTIVE-I) trial dataset [10], which may bias the observed effect given the differences in patient characteristics at baseline. Similarly, including trials with only patients without AF may dilute the effects of BP lowering that could be seen in patients with AF. The authors have addressed this by performing sensitivity analyses excluding these studies, which showed comparable results, reassuring us that selection bias does not materially affect the study conclusions.
In the new 2020 ESC guidelines on the “ABC” approach to the management of AF [7], the management of other associated cardiovascular risk factors has become an integral component of optimal management of AF. The shift in the management of AF towards a more holistic approach is a step in the right direction, as has been shown by the improvement in outcomes [11,12] and reduction in healthcare-associated costs [13]. Compliance with the “ABC” management approach requires clear, evidence-based guidelines in terms of treatment targets.
With regard to hypertension, the currently recommended BP target (≤130/80 mmHg) is based upon the current ESC hypertension guidelines [14] and observational data showing the greatest benefit at systolic BP between 120 and 129 mmHg [15–17]. Whether this target is optimal for the reduction of future MACE in patients with AF is unknown.
This IPD meta-analysis by the BPLTTC has shown that the presence of AF does not alter the treatment effects of antihypertensives. BP lowering reduces MACE to a similar extent in patients with and without AF. Owing to the higher absolute risk of MACE in patients with AF, BP lowering in these patients would result in a greater absolute risk reduction. This should provide sufficient evidence to convince clinicians regarding the benefits of strict BP control in patients with AF, and the consultation for patients with AF should always involve a conversation about managing hypertension, be it lifestyle modification or pharmacological treatment. However, the potential benefits (or harms) of a much lower BP target (below the recommended 130/80 mmHg) and the ideal choice or combination of antihypertensives remain unanswered and would require future studies to provide further insight.
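That relative-versus-absolute distinction is easy to make concrete. Below is a minimal sketch, assuming the roughly 10% relative MACE reduction per 5 mmHg of SBP lowering reported in the meta-analysis and purely hypothetical baseline MACE risks for patients with and without AF; it illustrates the arithmetic only and is not a result from the BPLTTC data.

```python
# Illustrative sketch: why the same relative risk reduction yields a larger
# absolute benefit in AF patients. The ~10% relative MACE reduction per
# 5 mmHg of SBP lowering is taken from the meta-analysis discussed above;
# the baseline MACE risks below are hypothetical placeholders.

def absolute_risk_reduction(baseline_risk: float, relative_reduction: float) -> float:
    """ARR = baseline risk x relative risk reduction."""
    return baseline_risk * relative_reduction

RELATIVE_REDUCTION_PER_5MMHG = 0.10  # from the BPLTTC meta-regression

# Hypothetical baseline risks of MACE (AF patients carry higher absolute risk).
baseline_risks = {"without AF": 0.10, "with AF": 0.20}

for group, risk in baseline_risks.items():
    arr = absolute_risk_reduction(risk, RELATIVE_REDUCTION_PER_5MMHG)
    nnt = 1 / arr  # number needed to treat to prevent one event
    print(f"{group}: baseline {risk:.0%}, ARR {arr:.1%}, NNT ~ {nnt:.0f}")

# The identical 10% relative reduction gives twice the absolute benefit
# (and half the NNT) in the higher-risk AF group.
```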

3.
Jean-Marc Chavatte and Georges Snounou discuss research involving controlled malaria infections.

Experimental human infections have contributed significantly to knowledge about infectious diseases [1]. In a recent report in PLOS Medicine, Rebecca Watts and colleagues followed this tradition by describing controlled human infection (CHI) with artemisinin-resistant Plasmodium falciparum [2]. Of the short list of pathogens still utilized in CHIs [1], malaria is exceptional because much of our knowledge of its natural course and its parasites’ (Plasmodium) biology derives from an extensive series of CHIs that stretches over a century. The trigger to investigate malaria through routine CHIs was provided by Julius Wagner-Jauregg. Convinced of a beneficial effect of fever on psychoses, he infected patients suffering from neurosyphilis with Plasmodium vivax and recorded unprecedented full recoveries or partial remissions in most [3]. Consequently, malariotherapy centers flourished worldwide, attracting malariologists who derived unique insights into the nature of malaria from these CHIs [4]. CHIs became so important for malaria studies that when malariotherapy was superseded by antibiotics, malariologists initiated ethically sanctioned volunteer programs.
While malariotherapy relied primarily on P. vivax, P. falciparum was used mainly in the United States of America on African American patients who were refractory to P. vivax and from the 1940s, on volunteers of all ethnic backgrounds. From the mid-1970s, the majority of CHIs were initiated by P. falciparum sporozoites, principally to serve in the development of a vaccine against the preerythrocytic forms. These investigations have a minimal clinical risk as the volunteers are treated as soon as blood stages are detectable (a parasite (P) burden ≈ 10 P/μL), a level in initial infection rarely preceded by any clinical sign. By contrast, for blood stage–induced P. falciparum infections that were mainly conducted to evaluate drug efficacy, curative drug treatment was to be initiated as parasite levels neared 1,000 P/μL, but in some volunteers, these exceeded 10,000 P/μL because of the unpredictable and often fulminating multiplication of P. falciparum. Thus, by the mid-1980s, blood stage inoculations were abandoned on ethical grounds.
Allan Saul and his team at the Queensland Institute of Medical Research (QIMR) broke this moratorium by bringing together 3 elements needed to embark on CHI initiated by blood stage malaria parasites. First, they elaborated a strategy to constitute a stock of infected red blood cells (RBCs) uncontaminated by other pathogens suitable to serve as an inoculum. Second, they devised a sensitive quantitative method of parasite detection (≈10 P/mL) allowing the monitoring of parasitemia over a sufficient number of replication cycles prior to the appearance of any clinical signs. Finally, a protocol ensuring the safety of the volunteers was elaborated, and ethical approval was secured. Thus, a stock of cryopreserved RBCs infected by P. falciparum 3D7 was obtained and validated [5], making it possible to conduct CHIs using a standard inoculum. Then, a limited number of studies aiming at testing the efficacy of various vaccine formulations [6] served to further develop, refine, and establish the induced blood stage model (IBSM).
From the early 2010s, the team led by James McCarthy at the QIMR Berghofer Research Institute (Queensland, Australia) turned the IBSM into an important strand of malariological research, both scientific and translational.
McCarthy’s team showcased the IBSM in a first study in which the parasite reduction rates of atovaquone–proguanil (Malarone) and artemether–lumefantrine (Coartem, an early artemisinin-based combination therapy or ACT) were determined [7] and found to agree with observations made in numerous clinical trials. In the new study [2], they have revisited the clearance rate of artesunate, another artemisinin derivative, but here used alone as a single dose against 2 P. falciparum cloned lines: 3D7, a sensitive line, and a resistant line Cam3.IIR539T (K13R539T in [2]) derived from an isolate collected in Cambodia [8]. In a pilot study, the authors inoculated 2 volunteers with K13R539T and established that this infection was safe and well tolerated. This provided the green light for the main study that compared the parasite clearance half-lives of these 2 lines following artesunate: that of the sensitive 3D7 line (3.2 h in 9 volunteers) was found to be half that of the resistant K13R539T line (6.5 h in 13 volunteers). This comparative study is of significance for several reasons. The P. falciparum K13R539T line is the first cloned artemisinin-resistant line that was banked and validated for use in CHI studies. Furthermore, this approach opens the potential to extend investigations to other parasite lines, to uncover the molecular mechanisms that underlie artemisinin resistance, and also to investigate other biological processes of malaria parasites.
The emergence and spread of parasites resistant to artemisinin derivatives poses the direst threat to the malaria control and elimination roadmap. In the mid-2000s, robust indications of parasites with reduced sensitivity to ACTs were obtained from clinical field trials of artesunate in Cambodia, which showed a prolongation of the time to parasite clearance [9,10]. Unexpectedly, this was not accompanied by an increase in the in vitro–determined IC50 of artesunate [9,10]. Thereafter, the ring-stage survival assay that correlated with parasite clearance dynamics was developed [11], and the combination of these 2 parameters now defines resistance to artemisinin. This unorthodox feature initially led some in the malaria community to question whether this represents “true” resistance or even the beginning of it. Those still unconvinced favor resistance to the companion drug administered de facto by ACT treatment as an alternative interpretation of slower parasite clearance. This might well be true for some parasites, but both interpretations are not mutually exclusive. The study of Watts and colleagues [2] provides a clear-cut demonstration that the delayed clearance phenomenon can be attributed to exposure to artesunate alone, although it did not demonstrate that the particular mutation in the Pfkelch13 gene alone was responsible for the extended clearance half-life. It is highly likely that the initial reticence to view delayed clearance as an indication, or even a forerunner, of artemisinin resistance contributed to missing a window of opportunity to stop the spread of artemisinin-resistant P. falciparum in Cambodia.
Artemisinin-resistant parasites have by now spread to other Southeast Asian countries [12].
From a historical perspective, it is gratifying to note the revival of experimental infections in humans and to appreciate their significant contributions to accelerating drug discovery and vaccine development. Their value, however, should not be limited to translational ends, because they offer a unique opportunity to define the breadth of the biological characteristics of other malaria parasite species of humans that are not amenable to in vitro investigations. McCarthy’s team has recently derived cryopreserved stocks of RBCs infected with P. vivax [13,14] and Plasmodium malariae [15] for future CHIs. Observations from CHIs are justifiably restricted by ethical considerations, yet they considerably advance our understanding of malaria.
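The headline comparison in the study, parasite clearance half-lives of 3.2 h versus 6.5 h, can be translated into clearance kinetics with a simple first-order decay model. The sketch below assumes log-linear clearance and an illustrative starting parasitaemia; only the two half-lives and the roughly 10 P/mL detection limit are taken from the text above.

```python
import math

# Sketch, assuming simple first-order (log-linear) parasite clearance:
# P(t) = P0 * exp(-k * t), with k = ln(2) / half-life.
# The 3.2 h (3D7) and 6.5 h (K13R539T) half-lives are the values reported
# in the study discussed above; the starting parasitaemia is illustrative.

def clearance_rate(half_life_h: float) -> float:
    """First-order clearance rate constant (per hour) from a half-life."""
    return math.log(2) / half_life_h

def time_to_threshold(p0: float, threshold: float, half_life_h: float) -> float:
    """Hours for parasitaemia to fall from p0 to threshold under first-order decay."""
    return math.log(p0 / threshold) / clearance_rate(half_life_h)

P0 = 10_000.0      # parasites/mL at treatment, illustrative value
THRESHOLD = 10.0   # approximate detection limit quoted for the IBSM

for line, t_half in {"3D7 (sensitive)": 3.2, "K13R539T (resistant)": 6.5}.items():
    hours = time_to_threshold(P0, THRESHOLD, t_half)
    print(f"{line}: k = {clearance_rate(t_half):.3f}/h, "
          f"~{hours:.0f} h to fall from {P0:.0f} to {THRESHOLD:.0f} P/mL")
```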

4.
Beryne Odeny reports from the CUGH 2021 virtual conference.

The first virtual Consortium of Universities for Global Health (CUGH) 2021 conference was held in March 2021 [1]. Two weeks of satellite symposia culminated in this highly prestigious conference, which drew an eclectic group of renowned speakers, global health leaders, program implementers, researchers, and students from across the globe. There were more than 5,000 delegates from diverse disciplines including public health, politics, education, medicine, planetary health, and finance. Top of the agenda was addressing critical gaps in global health and development against the backdrop of the COVID-19 pandemic.
CUGH is an organization of over 170 academic institutions and organizations throughout the world, engaged in addressing global health challenges [1]. The 2021 conference was meticulously and creatively planned, as was evidenced by the dynamic virtual platform, which hosted several global leader interviews, general sessions, 40 concurrent sessions, 7 plenary sessions, over 700 poster programs, and the Pulitzer Center Film festivals–yes, movies were on the menu [2]. Best of all, the platform held up, with minimal technical difficulties. The conference agenda had curated sessions carefully customized to varying attendee interests and expertise. Participants could seamlessly and discreetly shuttle between sessions.
The inaugural interviews, with Dr. Anthony Fauci of the United States and Dr. Hugo Lopez-Gatell of Mexico, set the tone with emphasis on a much-needed global response to the ongoing pandemic. “2020 was a watershed moment in Global Health,” said Dr. Fauci. The COVID-19 pandemic indiscriminately unveiled the fragility of health systems in high-income countries (HIC) and low- and middle-income countries (LMICs) alike. He unpacked the origins, evolution, and contention around current public health mandates such as mask wearing. He discussed vaccines–exploring vaccine manufacturing in LMICs, open patents, implications of emerging COVID-19 variants, and advice on curbing the prevailing vaccine infodemic (i.e., pandemic of misinformation) [2–4]. Dr. Lopez-Gatell described the pandemic as a “massive social event” fueled by deficits in health systems, politics, and governance, and by the growing tide of non-communicable diseases (NCDs) [5]. In a brief video recording, Dr. Tedros Adhanom Ghebreyesus, WHO’s Director-General, implored global partners to sign the COVID-19 Declaration on vaccine equity, which he termed “the defining challenge of 2021” [6].
The post-pandemic forecast for global health was dire. COVID-19 has disrupted decades of progress toward attainment of Universal Health Coverage (UHC), and it will be doubly difficult to restore, by 2035, health indicators to their levels prior to the pandemic [7–9]. A modelling study by Dr. Wenhui Mao of Duke University showed that, even in the most optimistic scenario, it may not be possible to achieve UHC in the next decade without breakthrough technologies and exceptional political commitment. Among four critical indicators (TB mortality rate, HIV mortality rate, under-5 mortality rate, and maternal mortality ratio), Dr. Mao found that only the HIV indicator had potential for recovery by 2035.
The metaphorical elephant in the room, and now its opposite, “the elephant not in the room”, encapsulate two themes: neocolonialism and equity, especially for marginalized groups.
Neocolonialism, a progeny of colonialism resulting from sustained global North–South power imbalances, manifests in low prioritization of the most pressing challenges and diseases in LMICs. Equity was a poignant theme across the CUGH sessions and satellite symposia. Sessions were dedicated to exploring the hegemonic structures and institutional systems that underpin adverse health system performance and outcomes. A sampling of wide-ranging topics on global challenges exacerbated by neocolonialism and inequities included: a) elevating the visibility and power of researchers in LMICs, including fragile and conflict-affected settings, through equitable access to funding, research autonomy and leadership, access to scholarly publishing, and senior authorship of research articles [10]; b) training next-generation global health professionals and building capacity for resource-challenged settings to address NCDs, including cancer care [5]; c) the Latin American and Caribbean health crises driven by social gradients and inequities; d) navigating conflicting interests between public health and the corporate food industry; e) the dearth and role of women leaders in global health and in the COVID-19 response; f) the disproportionate incidence of HIV in adolescent girls and young women in sub-Saharan Africa (SSA) [11]; g) the disparate burden of neonatal mortality in LMICs and marginalized communities within HIC; and h) leveraging the power of film to evoke emotion and induce a consolidated response to global challenges. In addition, various facets of the human ecosystem were unpacked, including climate change, biodiversity preservation, political climate, and the global kleptocracy, with attention to their implications for the health of the most marginalized populations.
Despite the highlighted issues, there is, potentially, a panacea for these inequities and challenges. One speaker, Dr. Lisa Adams of Dartmouth College, proposed a paradigm shift that summarized a wide range of deliberations–“moving global health out of the realm of charity into global citizenship, security, human rights, equal partnership, and interdisciplinary collaboration between LMICs and HICs.” Moving forward, more deliberate effort should be given to several elements. First, rethinking governance and funding at a global level while promoting the autonomy of LMICs and conflict-affected settings to drive their health agenda, independent of HIC interests. Bringing the elephant into the room by making equal space for LMICs to set the agenda at global tables of discussion around funding, research, and development will be pivotal to dismantling neocolonialism. Furthermore, funders and partners should work with in-country systems in LMICs as opposed to bypassing them. This is essential to building resilient health systems unified at national levels to allow for cross-discipline collaborations and swift responses to health threats. Rwanda is a laudable example, having swiftly remodeled its existing health systems, including routine electronic information systems, for nationwide COVID-19 surveillance, testing, contact tracing, and vaccination. Second, investing time to build trusting relationships between researchers or implementers and policy makers by upholding a participatory approach to research and implementation of evidence-based practices. This is essential globally, to support development of global public goods such as vaccines, free from market dynamics and aimed at universal and equitable access.
Third, introducing policies that steer economies toward production with less fragmentation of nature and reduced pollution. These include protected area management, financing of nature-positive projects, and conservationist work for natural capital preservation. Global and public health practitioners need to educate and empower citizens to choose healthy and ecologically sustainable consumption practices. Fourth, promoting development of novel technologies for preventing HIV infection, such as broadly neutralizing antibodies, which could overturn the unequal burden of HIV in adolescents and young women in SSA. Finally, HICs have a lot they can learn from LMICs. COVID-19 clearly demonstrated that a country’s Global Health Security Index ranking is not necessarily commensurate with its degree of success in handling pandemics, among other public health threats [8,12,13].
Throughout the conference, it was apparent that equity and collectivity in global health are necessary–not optional. Dr. Elvin Geng of Washington University in St. Louis remarked that the path to equity should be measurable, with routinely incorporated metrics that track interventions to redress inequity and foster accountability. To achieve this, the tools of implementation science can be employed at both regional and global levels [14]. Overall, the remarkable interlacing of diverse disciplinary sessions at CUGH 2021 not only brought to light pressing world problems but also equipped participants with a wellspring of potential remedies and collaborative opportunities. The panelists and speakers effectively portrayed the layered and multidimensional nature of global challenges, underscoring the need for similarly multifaceted solutions. CUGH 2021 sparked thought-provoking discourse around global health strategies and re-invigorated the collective passion of global health experts, novices, and everyone in between, to build forward better.

5.
Kate Causey and Jonathan F Mosser discuss what can be learnt from the observed impacts of the COVID-19 pandemic on routine immunisation systems.

In the final months of 2021, deaths due to Coronavirus Disease 2019 (COVID-19) surpassed 5 million globally [1]. Available data suggest that even this staggering figure may be a substantial underestimate of the true toll of the pandemic [2]. Beyond mortality, it may take years to fully quantify the direct and indirect impacts of the COVID-19 pandemic, such as disruptions in preventive care services. In an accompanying research study in PLOS Medicine, McQuaid and colleagues report on the uptake of routine childhood immunizations in 2020 in Scotland and England during major pandemic-related lockdowns [3]. This adds to a growing body of literature quantifying the impact of the COVID-19 pandemic on routine health services and childhood immunization [4,5], which provides important opportunities to learn from early pandemic experiences as immunization systems face ongoing challenges.
McQuaid and colleagues compared weekly or monthly data on vaccine uptake in Scotland and England from January to December of 2020 to the rates observed in 2019 to estimate the changes in uptake before, during, and after COVID-19 pandemic lockdowns in each country. The authors included 2 different preschool immunizations, each with multiple doses. They found significantly increased uptake within 4 weeks of eligibility during the lockdown and postlockdown periods in Scotland for all 5 vaccine dose combinations examined: during lockdown, increases ranged from 1.3 to 14.3 percentage points. In England, there were significant declines in uptake during the prelockdown, lockdown, and postlockdown periods for all 4 vaccine dose combinations examined. However, declines during lockdown were small, with decreases ranging from 0.5 to 2.1 percentage points. Due to the nature of the data available, the authors were unable to account for possible seasonal variation in vaccine delivery, control for important individual-level confounders or effect modifiers such as child sex and parental educational attainment, or directly compare outcomes across the 2 countries.
These findings stand in contrast to the documented experience of many other countries, where available data suggest historic disruptions in routine childhood vaccination coverage, particularly during the first months of pandemic-related lockdowns [5,6]. Supply-side limitations such as delayed shipments of vaccines and supplies [7], inadequate personal protective equipment [8], staff shortages [9], and delayed or canceled campaigns and introductions [9] threatened vaccine delivery. Furthermore, fear of exposure to COVID-19 at vaccination centers [10], misinformation about vaccine safety [8], and lockdown-related limitations on travel to facilities [9,10] reduced demand. In polls of country experts conducted by WHO, UNICEF, and Gavi, the Vaccine Alliance, throughout the second quarter of 2020, 126 of 170 countries reported at least some disruption to routine immunization programs [10,11]. Global estimates suggest that millions more children missed doses of important vaccines than would have in the absence of the COVID-19 pandemic [5,6].
While many vaccine programs showed remarkable resilience in the second half of 2020, with rates of vaccination returning to or even exceeding prepandemic levels [5,6], disruptions to immunization services persisted into 2021 in many countries [12].
As the authors discuss, it is critical to pinpoint the specific program policies and strategies that contributed to increased uptake in Scotland and only small declines in England and, more broadly, to the rapid recovery of vaccination rates observed in many other countries. McQuaid and colleagues cite work suggesting that increased flexibility in parental working patterns during lockdowns, providing mobile services or public transport to vaccine centers, and sending phone- and mail-based reminders are strategies that may have improved uptake of timely vaccination in Scotland during this period [13]. Similarly, immunization programs around the world have employed a broad range of strategies to maintain or increase vaccination during the pandemic. Leaders in Senegal, Paraguay, and Sri Lanka designed and conducted media campaigns to emphasize the importance of childhood immunization even during lockdown [8,14,15]. Although many programs delayed mass campaigns in the spring of 2020, multiple countries were able to implement campaigns by the summer of 2020 [8,16–20]. In each of these examples, leaders responded quickly to meet the unique challenges presented by the COVID-19 pandemic in their communities.
Increased data collection and tracking systems are essential for efficient and effective responses as delivery programs face challenges. When concern arose about pandemic-related disruptions to immunization services, public health decision-makers in Scotland and England responded by increasing the frequency and level of detail in reports of vaccine uptake and by making these data available for planning and research. The potential for robust data systems to inform real-time decision-making is not limited to high-income countries. For instance, the Nigerian National Health Management Information System released an extensive online dashboard shortly after the onset of the pandemic, documenting the impact of COVID-19 on dozens of indicators of health service uptake, including 16 related to immunization [21]. Vaccination data systems that track individual children and doses, such as the reminder system in Scotland, allow for highly targeted responses. Similarly, in Senegal, Ghana, and Karachi, Pakistan, healthcare workers have relied on existing or newly implemented tracking systems to identify children who have missed doses and provide text message and/or phone call reminders [8,22,23]. Investing in robust routine data systems allows for rapid scale-up of data collection, targeted services to those who miss doses, and a more informed response when vaccine delivery challenges arise.
Policy and program decision-makers must learn from the observed impacts of the COVID-19 pandemic on health systems and vaccine delivery. The study by McQuaid and colleagues provides further evidence that vaccination programs in England and Scotland leveraged existing strengths and identified novel strategies to mitigate disruptions and deliver vaccines in the early stages of the pandemic. However, the challenges posed by the pandemic to routine immunization services continue.
To mitigate the risk of outbreaks of measles and other vaccine-preventable diseases, strategies are needed to maintain and increase coverage, while ensuring that children who missed vaccines during the pandemic are quickly caught up. The accompanying research study provides important insights into 2 countries where services were preserved—and even increased—in the early pandemic. To meet present and future challenges, we must learn from early pandemic successes such as those in Scotland and England, tailor solutions to improve vaccine uptake, and strengthen data systems to support improved decision-making.
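The year-on-year comparison at the heart of the study is straightforward to reproduce in outline. The sketch below computes percentage-point changes in uptake within 4 weeks of eligibility between matched periods of 2019 and 2020; every figure in it is an invented placeholder rather than data from McQuaid and colleagues.

```python
# Minimal sketch of the comparison described above: uptake within 4 weeks of
# eligibility in matched periods of 2019 and 2020, expressed as a
# percentage-point change. All figures below are invented placeholders,
# not data from the accompanying study.

uptake_2019 = {"prelockdown": 0.85, "lockdown": 0.84, "postlockdown": 0.86}
uptake_2020 = {"prelockdown": 0.84, "lockdown": 0.90, "postlockdown": 0.88}

for period in uptake_2019:
    pp_change = (uptake_2020[period] - uptake_2019[period]) * 100
    print(f"{period}: {pp_change:+.1f} percentage points versus 2019")
```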

6.
Peter Figueroa and co-authors advocate for equity in the worldwide provision of COVID-19 vaccines.

Many may not be aware of the full extent of global inequity in the rollout of Coronavirus Disease 2019 (COVID-19) vaccines in response to the Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) pandemic. As of June 20, 2021, only 0.9% of those living in low-income countries and less than 10% of those in low- and middle-income countries (LMICs) had received at least 1 dose of a COVID-19 vaccine compared with 43% of the population living in high-income countries (HICs) [1] (Fig 1). Only 2.4% of the population of Africa had been vaccinated compared with 41% of North America and 38% of Europe [1,2] (S1 Fig). Primarily due to the inability to access COVID-19 vaccines, less than 10% of the population in as many as 85 LMICs had been vaccinated compared with over 60% of the population in 26 HICs [1]. Only 10 countries account for more than 75% of all COVID-19 vaccines administered [3]. This striking and ongoing inequity has occurred despite the explicit ethical principles affirming equity of access to COVID-19 vaccines articulated in the WHO SAGE values framework [4,5], prepared in mid-2020, well prior to the availability of COVID-19 vaccines.
Fig 1. Proportion of people vaccinated with at least 1 dose of COVID-19 vaccine by income (April 14 to June 23, 2021). Note: Data on China appeared in the database on June 9, hence the jump in upper middle-income countries. COVID-19, Coronavirus Disease 2019. Source: https://ourworldindata.org/covid-vaccinations.
The COVID-19 pandemic highlights the grave inequity and inadequacy of the global preparedness and response to serious emerging infections. The establishment of the Coalition for Epidemic Preparedness Innovations (CEPI) in 2018, the Access to COVID-19 Tools Accelerator (ACT-A), and the COVID-19 Vaccines Global Access (COVAX) Facility in April 2020 and the rapid development of COVID-19 vaccines were all positive and extraordinary developments [6]. The COVAX Facility, as of June 2021, has delivered approximately 83 million vaccine doses to 75 countries, representing approximately 4% of the global supply, and one-fifth of this was for HICs [7]. The COVAX Facility has been challenged to meet its supply commitments to LMICs due to insufficient access to doses of COVID-19 vaccines with the prerequisite WHO emergency use listing (EUL) or, under exceptional circumstances, product approval by a stringent regulatory authority (SRA) [8,9]. Because of the anticipated insufficient COVID-19 vaccine supply through the COVAX Facility, the majority of nonvaccine-producing LMICs made the decision, early in the COVID-19 pandemic, to secure and use vaccines produced in China or Russia prior to receipt of WHO EUL or SRA approval. Most of the vaccines used in LMICs as of June 20, 2021 (nearly 1.5 billion doses of the 2.6 billion doses administered) were neither WHO EUL nor SRA approved at the time they were given [10]. This may raise possible concerns with respect to the effectiveness, safety, and acceptability of individual vaccines used by many countries [8,9].

8.
Olivia Oxlade and co-authors introduce a Collection on tuberculosis preventive therapy in people with HIV infection.

The most recent World Health Organization Global Tuberculosis (TB) Report suggests that 50% of people living with HIV (PLHIV) newly enrolled in HIV care initiated tuberculosis preventive treatment (TPT) in 2019 [1]. TPT is an essential intervention to prevent TB disease among people infected with Mycobacterium tuberculosis—some 25% of the world’s population [2]. Without TPT, it is estimated that up to 10% of individuals will progress to TB disease. Among PLHIV, the prognosis is worse. Of the approximately 1.4 million annual deaths from TB, 200,000 occur among PLHIV [1], who experience TB at rates more than 30 times [3] higher than people living without HIV.
In 2018, governments at the United Nations High-Level Meeting (UNHLM) on TB committed to rapid expansion of testing for TB infection and provision of TPT [4]. The goal was the provision of TPT to at least 24 million household contacts of people with TB disease and 6 million PLHIV between 2018 and 2022. However, by the end of 2019, fewer than half a million household contacts had initiated TPT, well short of the pace needed to achieve the 5-year target [1]. On the other hand, approximately 5.3 million PLHIV have initiated TPT in the past 2 years [1], with particularly dramatic increases in countries supported by the President’s Emergency Plan for AIDS Relief (PEPFAR) [5]. Globally, among PLHIV entering HIV care programs, TPT initiation rose from 36% in 2017 to 49% in 2018 and 50% in 2019 [6,7].
To provide insight into scaling up TPT for PLHIV, it is important to consider each of the many steps involved in the “cascade of care” for TPT. A previous systematic review of studies in several populations receiving TPT concluded that nearly 70% of all people who may benefit from TPT were lost to follow-up at cascade of care steps prior to treatment initiation [8]. To maximize the impact of TPT for TB prevention among PLHIV, the full TPT cascade of care must be assessed to identify problems and develop targeted solutions addressing barriers at each step. Until now, these data had not been synthesized for PLHIV.
In order to address important research gaps related to TPT in PLHIV such as this one, we are now presenting a Collection in PLOS Medicine on TPT in PLHIV. In the first paper in this Collection, Bastos and colleagues performed a systematic review and meta-analysis of the TPT cascade of care in 71 cohorts with a total of 94,011 PLHIV [9]. This analysis highlights key steps in the cascade where substantial attrition occurs and identifies individual-level and programmatic barriers and facilitators at each step. In stratified analyses, they found that losses during the TPT cascade were not different in high-income compared to low- or middle-income settings, nor were losses greater in centers performing tests for TB infection (tuberculin skin test [TST] or interferon gamma release assay [IGRA]) prior to TPT initiation.
The net benefits of TPT could potentially be increased through greater adoption of shorter rifamycin-based TPT regimens, for which there is increasing evidence of greater safety, improved treatment completion, and noninferior efficacy, compared to isoniazid regimens. Two reviews of rifamycin-based regimens in mostly HIV-negative adults and children concluded that they were as effective for prevention of TB as longer isoniazid-based regimens, with better treatment completion and fewer adverse events [10,11].
However, safety and tolerability of TPT regimens can differ substantially between people with and without HIV, and for rifamycin-based TPT regimens, safety outcomes were actually worse in people without HIV [12]; there can also be important drug–drug interactions between rifamycin-based regimens and antiretroviral drugs [13]. Reviews of studies focused on PLHIV concluded that TPT (regardless of regimen selected) significantly reduced TB incidence [14] and that the benefits of continuous isoniazid in high TB transmission settings outweighed the risks [15]. As part of this Collection, Yanes-Lane and colleagues conducted a systematic review and network meta-analysis of 16 randomized trials to directly and indirectly compare the risks and benefits of isoniazid and rifamycin-based TPT regimens among PLHIV [16]. Their findings highlight the better safety, improved completion, and evidence of efficacy, particularly reduced mortality, with rifamycin-based TPT regimens, while also noting improved TB prevention with extended duration mono-isoniazid regimens. Their review also revealed that few studies exist on some important at-risk populations, such as pregnant women and those with drug-resistant TB infection.
In North America, recommendations changed in 2020 to favor shorter rifamycin-based regimens over isoniazid [17], but WHO still favors isoniazid [18], largely due to the lower drug costs. Although drug costs for rifamycins are typically higher than for isoniazid, their shorter duration and better safety profile mean that total costs for care (including personnel costs) may be lower for rifamycin-based regimens, even in underresourced settings [19]. The cost-effectiveness of different TPT regimens among PLHIV in underresourced settings remains uncertain, as does the impact on cost efficiency of antiretroviral therapy (ART) and of diagnostic tests for TB infection, such as TST or IGRA. Uppal and colleagues, in the third paper in this Collection, performed a systematic review and meta-analysis of 61 published cost-effectiveness and transmission modeling studies of TPT among PLHIV [20]. In all studies, TPT was consistently cost-effective, if not cost saving, despite wide variation in key input parameters and settings considered.
When comparing access to TPT among PLHIV to that among household contacts, many would consider the glass half full, given that almost half of all PLHIV newly accessing care initiated TPT in 2018 and 2019, and the UNHLM goal of 6 million PLHIV initiating TPT was already nearly achieved by the end of 2020. This remarkable achievement is the result of strong recommendations from WHO for TPT among PLHIV for nearly a decade and strong donor support. These policies are, in turn, based on clear and consistent evidence of individual benefits from multiple randomized trials, plus consistent evidence of cost-effectiveness from many economic analyses, as summarized in the papers in this Collection. These are useful lessons for scaling up TPT for other target populations, particularly household contacts, of whom fewer than half a million have initiated TPT against the 24 million–person target set in 2018.
However, the glass of TPT among PLHIV is also half empty. In contrast to the “90-90-90” targets, 50% of PLHIV newly enrolled in care do not initiate TPT, and PLHIV still bear a disproportionate burden of TB. Programmatic scale-up of TPT continues to encounter challenges that need to be overcome in order to translate individual-level success to population-level improvement.
The study by Bastos and colleagues in this Collection has identified programmatic barriers including drug stockouts and suboptimal training for healthcare workers, but it also offers useful solutions, including integration of HIV and TPT services [9]. New evidence on the success of differentiated service delivery will also be invaluable to support programmatic scale-up in different settings [21]. Acting on this evidence will be essential to achieve the goal of full access to effective, safe, and cost-effective TPT for PLHIV.
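The "cascade of care" framing above is essentially multiplicative: the proportion of eligible people who actually start TPT is the product of the retention at each successive step. The sketch below illustrates that arithmetic with invented step names and retention proportions; only the overall point, that modest losses at each step compound into the large pre-initiation attrition reported in the cited review, comes from the text.

```python
# Sketch of the multiplicative cascade-of-care logic referred to above.
# Step names and retention proportions are illustrative assumptions, not
# estimates from Bastos and colleagues or the earlier review.

cascade_steps = [
    ("Intended for screening", 1.00),
    ("Completed TB infection test", 0.70),
    ("Received test result", 0.85),
    ("Referred for TPT", 0.80),
    ("Initiated TPT", 0.75),
]

remaining = 1.0
for step, retained in cascade_steps:
    remaining *= retained
    print(f"{step}: {remaining:.0%} of the original cohort remaining")

print(f"Cumulative loss before or at TPT initiation: {1 - remaining:.0%}")
```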

9.
Céline Caillet and co-authors discuss a Collection on use of portable devices for the evaluation of medicine quality and legitimacy.

Summary points
  • Portable devices able to detect substandard and falsified medicines are vital innovations for enhancing the inspection of medicines in pharmaceutical supply chains and for timely action before they reach patients. Such devices exist, but there has been little to no independent scientific evidence of their accuracy and cost-effectiveness to guide regulatory authorities in choosing appropriate devices for their settings.
  • We tested 12 portable devices and evaluated their diagnostic performance and the resources required to use each device in a laboratory.
  • We then assessed the utility and usability of the devices in medicine inspectors’ hands in a pharmacy mimicking a real-life Lao pharmacy.
  • We then assessed the health and economic benefits of using portable devices compared to not using them in a low- to middle-income setting.
  • Here, we discuss the conclusions and practical implications of the multiphase study discussed in this Collection. We discuss the results, highlight the evidence gaps, and provide recommendations on the key aspects to consider in the implementation of portable devices and their main advantages and limitations.
Global concerns over the quality of medicines, especially in low- and middle-income countries (LMICs), are exacerbated by the Coronavirus Disease 2019 (COVID-19) pandemic [1,2]. The World Health Organisation (WHO) estimated that 10.5% of medicines in LMICs may be substandard or falsified (SF) [3]. “Prevention, detection, and response” to SF medical products are strategic priorities of WHO to contribute to effective and efficient regulatory systems [4]. Numerous portable medicine screening devices are available on the market, holding great hope for detection of SF medicines in an efficient and timely manner, and might therefore serve as key detection tools to inform prevention and response [5,6]. Screening devices have the potential to rapidly identify suspected SF medical products, giving more objective selection for reference assays and reducing the financial and technical burden. However, little is known regarding how well the existing devices fulfil their functions and how they could be deployed within risk-based postmarketing surveillance (rb-PMS) systems [5–7].
We conducted, during 2016 to 2018, a collaborative multiphase exploratory study aimed at comparing portable screening devices. This paper accompanies 4 papers in this PLOS Collection, “A multiphase evaluation of portable screening devices to assess medicines quality for national Medicines Regulatory Authorities.” The first article introduced the multiphase study [8]. In brief, 12 devices (S1 Table) were first evaluated in a laboratory setting [9], to select the most field-suitable devices for further evaluation of their utility/usability by Lao medicines inspectors [10]. A cost-effectiveness analysis of their implementation for rb-PMS in Laos was also conducted [11]. The results of these 3 phases were discussed in a multistakeholder meeting in 2018 in Vientiane, Lao PDR (S1 Text). The advantages/disadvantages, cost-effectiveness, and optimal use of screening devices in medicine supply chains were discussed to develop policy recommendations for medicines regulatory authorities (MRAs) and other institutions that wish to implement screening technologies. A summary of the main results of the multiphase study is presented in S2 Table.
As far as we are aware, this is the first independent investigation comparing, from a public health perspective, the accuracy and practical use of a diverse set of portable medicine quality screening devices. The specific objective(s) for which the portable screening technologies are implemented, their advantages/limitations, costs and logistics, and the development of detailed standard operating procedures and training programmes are key points to be carefully addressed when considering selection and deployment of screening technologies within specific rb-PMS systems (Fig 1).
Fig 1. Major proposed considerations for the selection and implementation of medicine quality screening devices. Each circle represents a key consideration when purchasing a screening device, grouped by themes (represented by heptagons). When the shapes overlap, the considerations are connected. For example, standard operating procedures are needed for the implementation of devices and should include measures for user safety. The circle diameters are illustrative.
Here, we utilise this research and related literature to discuss the evidence, gaps, and recommendations, complementary to those recently published by the US Pharmacopeial Convention [12].
These discussions can inform policy makers, non-governmental organisations, wholesalers/distributors, and hospital pharmacies considering the implementation of such screening devices. We discuss unanswered research questions that require attention to ensure that the promise these devices hold is realised.
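To make the "more objective selection for reference assays" point concrete, here is a small worked example of the screening-then-confirm workflow. The 10.5% SF prevalence is the WHO estimate quoted above, but the device sensitivity and specificity are hypothetical values chosen only for illustration.

```python
# Worked example of the triage logic described above: a portable screening
# device flags suspect samples, and only flagged samples go on to a
# confirmatory reference assay. The 10.5% SF prevalence is the WHO estimate
# quoted above; the device sensitivity and specificity are hypothetical.

prevalence = 0.105        # WHO estimate of SF medicines in LMICs
sensitivity = 0.90        # hypothetical device sensitivity
specificity = 0.85        # hypothetical device specificity
samples = 1_000           # inspected samples, illustrative

sf = samples * prevalence
genuine = samples - sf

true_pos = sf * sensitivity                 # SF samples correctly flagged
false_pos = genuine * (1 - specificity)     # genuine samples wrongly flagged
flagged = true_pos + false_pos

print(f"Samples flagged for reference assay: {flagged:.0f} of {samples}")
print(f"SF medicines detected: {true_pos:.0f} of {sf:.0f}")
print(f"SF medicines missed by screening: {sf - true_pos:.0f}")
print(f"Confirmatory workload avoided: {1 - flagged / samples:.0%}")
```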

10.
Zulfiqar A. Bhutta discusses prevention and treatment strategies for optimization of community-based management of severe acute malnutrition in children.

In this issue of PLOS Medicine, Matt Hitchings and colleagues detail the findings from their prospective cluster-randomized crossover trial conducted across 10 health centers in Sokoto, Nigeria, to assess nutritional recovery in children with uncomplicated severe acute malnutrition (SAM) receiving monthly follow-up compared to the standard weekly follow-up schedule [1]. In almost 4,000 children so allocated, nutritional recovery at 3 months’ follow-up was lower in the monthly follow-up group (52.4%) compared to the standard weekly group (58.8%), with higher cumulative mortality at 3 months (8.5% versus 6.2% with the standard weekly follow-up). In contrast, rates of default and relapse were significantly lower among SAM children allocated to monthly follow-up. The authors, while urging caution in adopting a modified schedule of monthly follow-up visits in such children, also recognize the trade-off of simplicity and ease of operations in some settings where weekly follow-up visits are not feasible.
Despite global progress in improving maternal and child undernutrition, the high burden of severe malnutrition persists. Recent estimates show a small reduction (from 15.9% to 14.2%) in wasting prevalence in low-income countries, and a slight increase (from 3.3% to 4.7%) in middle-income countries, although overall almost 50 million children aged under 5 years still remain wasted worldwide [2]. This burden of SAM has most likely been exacerbated during the recent Coronavirus Disease 2019 (COVID-19) pandemic, with an estimated additional 6.7 million children becoming wasted in 2020 [3].
Within this large number of wasted children are those with SAM, who are triaged to facility-based nutritional rehabilitation if seriously ill, or to community-based treatment regimens if stable. The development of standardized management protocols for children with SAM with ready-to-use therapeutic foods (RUTFs) represents one of the greatest advances in treating such children at scale and reducing the mortality associated with the condition [4]. However, given the contexts in which childhood SAM clusters, such as settings affected by extreme poverty, climate change, conflict, and population displacement, major challenges remain in optimizing SAM management. These include relatively high rates of relapse [5] and residual mortality associated with severe malnutrition, often exceeding 10% in some settings [6]. Strategies are thus needed to optimize community case management aimed at simplifying the treatment regimen for SAM and reducing defaults and relapse rates among affected children.
Such real-life evaluations of management strategies for severe malnutrition among at-risk children are few and far between, and most welcome. The global evidence base for the management of SAM in various settings is still mixed, with wide variations in recovery or relapse rates and mortality. This is especially the case in complex emergencies and conflict settings [7], with obvious limitations of human resources and commodities. The challenges of managing SAM in different contexts and settings are directly related to available nutrition rehabilitation commodities and trained human resources, as well as the ability of poor and food-insecure households to follow complex regimens and follow-up schedules. For many poor households with daily wage laborers or workers, taking a day off to travel to ambulatory care settings is a weekly financial and logistic hardship that may be impossible to bear.
Alternative approaches, with community outreach workers providing care and commodities in domiciliary settings, have also met with mixed success, with lower rates of uptake in effectiveness settings with busy public-sector workers [8,9].
There are additional research questions related to the nutritional rehabilitation and management of SAM, including dosage schedules and protocols for administering RUTF in outreach and ambulatory programs. Additional therapeutic challenges in managing children with SAM include the limited repertoire of options for interventions in children under 6 months of age, as well as strategies to manage children with concurrent stunting and wasting [1,10]. While the recommendations for facility-based management of unstable children with SAM are well recognized [11], corresponding protocols for ambulatory management of severely malnourished children with suspected infections and at risk of adverse outcomes are still a subject of much debate [12].
The gains from potentially simplifying ambulatory management strategies for SAM are considerable but must be weighed against the best-possible and cost-effective strategies. Of great priority are strategies that integrate SAM management in community settings with additional child health and development interventions [13]. Given the close correlation and relationship between various forms of malnutrition (moderate and severe acute malnutrition), there is growing interest in common management protocols and simplified regimens for preventing and managing all forms of acute malnutrition. The sizeable subgroup of children with concurrent wasting and stunting is at much greater risk of adverse outcomes and mortality [14] and needs strategies that also integrate maternal and early child health and nutrition interventions.
There has been a healthy increase in research related to prevention and management strategies for SAM in recent years, all adding to the evidence base for effective implementation in field settings. Corresponding processes for guidelines development by WHO are understandably cautious, but it is worth noting that the WHO guidelines for the management of SAM are now almost a decade old [15] and need updating as well as flexibility in implementation. Studies such as those by Hitchings and colleagues [1] should show the way to optimize the screening and management of SAM in settings with limited facilities and community capacity for weekly follow-up. The recognition that such infants may be at higher risk of relapse or mortality could well require additional contacts, such as fortnightly follow-up or outreach services, areas that should be studied in future evaluations.

12.
In this Perspective, Fiona Bragg and Zhengming Chen discuss the burden of diabetes in the Chinese population.

The worldwide epidemic of diabetes continues to grow [1]. In China, the rise in prevalence has been notably rapid; about 12% of the adult population has diabetes [2], accounting for almost one quarter of cases worldwide [1] and representing a 10-fold increase over the last 3 to 4 decades. It is appropriate, therefore, that diabetes—both prevention and management—is a major focus of current health policy initiatives in China [3,4], and their success depends on reliable quantification of the burden of diabetes. Commonly used measures such as prevalence and incidence fail to capture excess mortality risks or differences in life expectancy in diabetes [5]. Moreover, they may be less easily interpreted by policy makers and affected individuals. Estimates of lifetime risks and life years spent living with diabetes in an accompanying study by Luk and colleagues provide a valuable new perspective on the burden of diabetes in the Chinese population [6].
The study used Hong Kong territory-wide electronic health records data for 2.6 million adults. Using a Markov chain model and Monte Carlo simulations, Luk and colleagues estimated age- and sex-specific lifetime risks of diabetes (incorporating both clinically diagnosed and undiagnosed diabetes) and remaining life years spent with diabetes. Their findings showed a lifetime risk of 65.9% and 12.7 years of life living with diabetes for an average 20-year-old with normoglycaemia. For an average 20-year-old with prediabetes, the corresponding estimates were 88.0% and 32.5 years, respectively. In other words, 6 out of 10 20-year-olds with normoglycaemia and 9 out of 10 with prediabetes would be expected to develop diabetes in their lifetime. The estimated lifetime risks declined with increasing age and were higher among women than men at all ages, likely reflecting women’s higher life expectancy.
These estimated lifetime risks are striking and concerning. Moreover, they are notably higher than western population estimates [7–10], including those considering both diagnosed and undiagnosed diabetes [9,10]. An Australian study estimated that 38% of 25-year-olds would develop diabetes in their lifetime [10]. Another study in the Netherlands reported 31.3% and 74.0% probabilities of developing diabetes in the remaining lifetime for individuals aged 45 years without diabetes and with prediabetes, respectively [9]. Diabetes incidence and overall mortality influence population lifetime risks. Differences in the glycaemic indicators used to identify undiagnosed diabetes may have contributed to differences between studies in diabetes incidence. In the study by Luk and colleagues, a combination of fasting plasma glucose (FPG), HbA1c levels, and oral glucose tolerance testing (OGTT) was used, while the Australian [10] and Netherlands [9] studies used FPG/OGTT and mainly FPG, respectively. However, it is unlikely these differences would fully account for the large disparities seen in lifetime risk. Similarly, differences between life expectancy in Hong Kong (84.8 years), Australia (83.4 years), and the Netherlands (82.2 years) are too small to explain the differences. Interestingly, the high lifetime risks observed in Hong Kong were more comparable to those in the Indian population, estimated at 55.5% and 64.6%, respectively, among 20-year-old men and women [11]. The typical type 2 diabetes (T2D) phenotype in these Asian populations may partly explain their higher estimated lifetime risks.
More specifically, T2D in both Chinese and Indian populations is characterised by onset among younger and less adipose individuals than typically observed in western populations, exacerbated by rapid urbanisation and associated unhealthy lifestyles [12].

However, aspects of Luk and colleagues’ study design may have overestimated lifetime diabetes risks. Chief among these is the data source used and associated selection bias. The Hong Kong Diabetes Surveillance Database includes only individuals who have ever had a plasma glucose or HbA1c measurement undertaken in a local health authority facility. Since measurement of glycaemic indicators is more likely among individuals at greater current or future risk of dysglycaemic states, this will have inflated estimates of lifetime risk and life years spent with diabetes. Although the study authors undertook replication in the smaller China Health and Retirement Longitudinal Survey (CHARLS) cohort to address this bias, it does not fully allay these concerns, with modestly lower estimated lifetime diabetes risks in the CHARLS cohort, even after accounting for its higher mortality. A further limitation is the treatment of transitions to dysglycaemic states as irreversible. Although data on long-term transition between glycaemic states are lacking, reversion from prediabetes (and less commonly diabetes) to normoglycaemia is well recognised, e.g., through lifestyle interventions [13].

Large-scale population-based cohort studies could valuably address many of the limitations described [14]. Furthermore, lifetime risks are, by definition, population-based and represent the risk of an average person in the population, limiting their value for communicating long-term disease risks to specific individuals. However, the extensive phenotyping (e.g., adiposity) characteristic of many large contemporary cohorts [14] would facilitate incorporation of risk factors into lifetime risk estimates, enhancing their relevance to individuals. Previous studies have found greater lifetime risks of diabetes associated with adiposity [9,11], and this approach could be extended to incorporate other established, as well as more novel (e.g., genetic), risk factors. This is arguably of particular relevance to later-onset chronic conditions, such as T2D, in which changes in risk factors during middle age can influence lifetime risks. A valuable extension of Luk and colleagues’ study will be estimation of risk factor-specific lifetime diabetes risks for the Chinese population.

Importantly, the limitations described do not detract from the enormity and importance of the challenge diabetes poses for China, including Hong Kong, and the estimates presented by Luk and colleagues provide valuable impetus for action. The disease burden insights can inform treatment programmes and enhance understanding of current and future impacts of diabetes and associated complications on the healthcare system. Moreover, T2D is preventable, and arguably, the greatest value of these estimated lifetime risks is in highlighting the need for, and informing the planning and provision of, diabetes primary prevention programmes. This includes identification of high-risk individuals, such as those with prediabetes, who are most likely to benefit from prevention interventions. 
However, the magnitude of the estimated lifetime diabetes risks, including among the large proportion of the population in a normoglycaemic state, additionally demonstrates the need for population-level prevention approaches, including environmental, structural, and fiscal strategies. Without such actions, the individual and societal consequences of diabetes for present and future generations in Hong Kong, as well as mainland China, will be immense.  相似文献   
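To make the estimation approach described above more concrete, the following is a minimal, hypothetical sketch of a Markov-chain Monte-Carlo style simulation of lifetime diabetes risk. It is not Luk and colleagues' model: the states are simplified to normoglycaemia, diabetes, and death, and the transition probabilities, starting age, and excess-mortality multiplier are illustrative placeholders rather than estimates from data.

```python
import random

# Illustrative (hypothetical) annual transition probabilities for a simplified
# three-state model: normoglycaemia -> diabetes -> death. Real analyses use
# age- and sex-specific probabilities estimated from cohort or registry data.
def p_diabetes(age):
    return min(0.002 + 0.0004 * (age - 20), 0.03)

def p_death(age, has_diabetes):
    base = min(0.0003 * 1.09 ** (age - 20), 1.0)
    return min(base * (1.8 if has_diabetes else 1.0), 1.0)  # 1.8 = assumed excess mortality

def simulate_one(start_age=20, max_age=110):
    """Simulate one life course; return (developed diabetes?, years lived with diabetes)."""
    age, has_diabetes, years_with_diabetes = start_age, False, 0
    while age < max_age:
        if random.random() < p_death(age, has_diabetes):
            break
        if not has_diabetes and random.random() < p_diabetes(age):
            has_diabetes = True
        if has_diabetes:
            years_with_diabetes += 1
        age += 1
    return has_diabetes, years_with_diabetes

def lifetime_estimates(n=50_000):
    runs = [simulate_one() for _ in range(n)]
    lifetime_risk = sum(d for d, _ in runs) / n
    mean_years_with_diabetes = sum(y for _, y in runs) / n
    return lifetime_risk, mean_years_with_diabetes

if __name__ == "__main__":
    risk, years = lifetime_estimates()
    print(f"Simulated lifetime risk: {risk:.1%}; mean years with diabetes: {years:.1f}")
```

In an actual analysis of this kind, the transition probabilities would be stratified by age, sex, and glycaemic state (including prediabetes), and the simulation repeated across Monte-Carlo draws to quantify uncertainty.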

13.
In this Perspective, Shivani Misra and Jose C Florez discuss the application of precision medicine tools in under-represented populations.

People of South Asian ancestry carry a 3-fold higher risk of developing type 2 diabetes (T2D) than white European individuals [1], with the disease typically manifesting a decade earlier [2] and at a leaner body mass index (BMI) [3]. The South Asian population is often considered a uniform group, but there is significant heterogeneity in the prevalence of T2D and its phenotypic manifestations across South Asia, with a higher prevalence in those from Bangladeshi and Pakistani communities [4]. Genome-wide association studies (GWAS) have not fully explained the excess risk observed in South Asian individuals [5,6], and attention has turned to strategies through which genetic information may be leveraged for clinical benefit, such as aggregating weighted single nucleotide polymorphisms (SNPs) that capture the overall genetic burden for a trait into a polygenic score (PS), sometimes described as a polygenic risk score [7]; a minimal sketch of this weighted-sum construction is given at the end of this item. However, constructing a PS remains challenging in populations that are underrepresented in GWAS.

In the accompanying article in PLOS Medicine [8], Hodgson and colleagues investigate the use of a PS to predict T2D in the Genes & Health (G&H) cohort, addressing a key knowledge gap in the applicability of such tools in underrepresented ethnicities. G&H is a pioneering community-based cohort of approximately 48,000 participants of predominantly British Bangladeshi and Pakistani heritage, combining genetic and longitudinal electronic healthcare record data. They first assessed the transferability of known T2D genetic risk loci in G&H and constructed a PS using variants from a multi-ancestry GWAS, adjusting the scores for Pakistani and Bangladeshi individuals and selecting the one with the highest odds for prediction. This score was then integrated with 3 versions of a clinical model (QDiabetes) to predict T2D onset over 10 years in 13,642 individuals who were diabetes-free at baseline. The authors show that incorporating the PS with QDiabetes provided better discrimination of progression to T2D, especially in those developing T2D under 40 years of age and in women with a history of gestational diabetes. Finally, they incorporated the PS into cluster analyses of baseline routine clinical characteristics, replicating clusters defined in European populations and identifying a cluster resembling a subgroup of severe insulin deficiency. This study significantly advances the field on the transferability of PSs, the reproducibility of T2D clusters, and the clinical translation of these findings to precision medicine for diabetes.  相似文献
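For illustration only, here is a minimal sketch of the weighted-sum construction of a polygenic score and one simple way of combining it with an externally computed clinical risk estimate. The SNP identifiers, effect weights, genotype dosages, clinical probabilities, and the PS coefficient are all invented for the example; it does not reproduce Hodgson and colleagues' pipeline or the QDiabetes algorithm.

```python
import numpy as np

# Hypothetical per-variant effect weights (log odds ratios) taken from GWAS
# summary statistics; real scores use thousands to millions of variants.
snp_weights = {"rs0001": 0.12, "rs0002": -0.05, "rs0003": 0.30}

def polygenic_score(dosages):
    """Weighted sum of risk-allele dosages (0, 1, or 2 copies per SNP)."""
    return sum(w * dosages.get(snp, 0) for snp, w in snp_weights.items())

# Toy cohort: genotype dosages plus a clinical 10-year risk estimate assumed to
# come from a separate clinical model (the values here are made up).
cohort = [
    ({"rs0001": 2, "rs0002": 1, "rs0003": 0}, 0.08),
    ({"rs0001": 0, "rs0002": 2, "rs0003": 1}, 0.15),
    ({"rs0001": 1, "rs0002": 0, "rs0003": 2}, 0.22),
]

raw_ps = np.array([polygenic_score(genotypes) for genotypes, _ in cohort])
clinical_risk = np.array([risk for _, risk in cohort])

# Standardise the PS within the cohort, then add it to the clinical model's
# linear predictor (log-odds) with an assumed coefficient of 0.5.
ps_z = (raw_ps - raw_ps.mean()) / raw_ps.std()
combined_log_odds = np.log(clinical_risk / (1 - clinical_risk)) + 0.5 * ps_z
combined_risk = 1 / (1 + np.exp(-combined_log_odds))
print(np.round(combined_risk, 3))
```

In practice the PS coefficient would be fitted in training data, and ancestry-specific adjustment of the score distribution (as the study describes for the Pakistani and Bangladeshi groups) is needed before such a score can be applied across populations.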

14.
Jean Adams discusses the evidence around food marketing restrictions and how they may be an effective way to support public health.

We live in a world increasingly saturated with marketing for less healthy foods [1]. One study found that children in New Zealand see an average of 27 instances of marketing for less healthy foods and only 12 for healthier foods each day [2]. Food marketing involves activities across the 4 Ps of the marketing mix: product, place, price, and promotion. We are encouraged to buy less healthy food products through their placement in prominent store locations such as checkouts, ends of aisles, and store entrances; price discounts; and promotions including advertising, cartoon tie-ins, and celebrity endorsements.

Systematic reviews have confirmed the effectiveness of these marketing techniques in influencing purchasing and consumption of less healthy foods [3–5]. Indeed, the documented power of food marketing has led the World Health Organisation to recommend limiting exposure as an overarching and enabling “best buy” to improve diets [6].

Supermarkets remain the location of about 70% of food spend in the United Kingdom [7]. The concentration of food marketing in grocery stores can feel particularly overwhelming, with parents describing the “temptation” as “like a trip to the zoo every week” for their children [8]. As such, supermarkets may be particularly important venues for addressing food marketing.

In 2 accompanying Research Articles in PLOS Medicine, Piernas and colleagues used nonrandomised approaches to study the impacts on sales of a range of strategies to rebalance the marketing of healthier versus less healthy products in 3 large UK supermarket chains [9,10]. Across the 2 papers, 7 different interventions were implemented: changing the relative availability of healthier versus less healthy products (2 interventions), removing less healthy products from prominent positions, placing healthier products at eye level, offering price discounts on healthier products, increasing signage on healthier products, and applying a range of entertainment tie-in promotions on healthier products (one intervention each). These variously aimed to encourage substitution of less healthy products with healthier alternatives or to reduce purchasing of less healthy foods without substitution.

Increasing the relative availability of healthier products, removing less healthy products from prominent positions, and price promotions on healthier products were all associated with changes in unit sales in the expected direction, although associations with changes in nutrients purchased were sometimes more modest. In contrast, moving healthier products to eye level and increasing signage were not associated with changes in sales. These findings are particularly timely in England, where a range of measures to reduce exposure to marketing of less healthy foods in retail environments are due to be implemented from October 2022 [11].

Piernas and colleagues worked in collaboration with large UK supermarket chains. That the chains were prepared to innovate to support public health indicates that rebalancing marketing towards healthier products may not be as burdensome to the sector as it has sometimes claimed [12]. It also strengthens the external validity of these studies, giving an indication of how customers react in real-world environments.

However, that the supermarket chains decided what the interventions should be also imposes limitations on wider interpretation of the findings. 
Each of the 7 interventions was applied to a different category of foods, without any rationale made explicit to the research team—for example, chocolate confectionery was removed from prominent positions, higher fibre breakfast cereals were placed at eye level, and price discounts were applied to fruit and vegetables. This makes it hard to determine whether observed impacts were unique to specific combinations of intervention and food category. Indeed, rather than particular marketing interventions being more effective than others across the board, it is possible that complex interplays between food category, marketing intervention, and other contextual aspects (such as shop and customer characteristics) interact to produce changes in sales.

The “squeezed balloon effect” proposes that restrictions on specific aspects of marketing may lead to compensatory increases in others [13]. For example, restricting television advertising of less healthy foods during and around children’s programmes in the UK was associated with increased exposure of adults to these adverts [14]. Wider compensation between, as well as within, media (for example, TV restrictions leading to more online marketing) may also be expected. It is possible that supermarkets willing to engage in university-assessed marketing changes may have self-policed any simultaneous compensatory activities, and, in any case, these would not necessarily have been identified in the studies by Piernas and colleagues. Any real-life compensation as the whole grocery sector adapts to government-imposed marketing restrictions may be difficult to predict. This reinforces the need for postimplementation evaluation.

The squeezed balloon effect means that the most effective marketing restrictions may be those that target marketing of the same products through multiple simultaneous interventions. In Chile, near-simultaneous implementation of front-of-pack warning labels, advertising restrictions, and a prohibition of sales in schools of products high in calories, sodium, sugar, or saturated fat was associated with substantial declines in purchases of targeted foods and nutrients [15]. This approach is also the underlying strategy in England, where near-simultaneous restrictions on TV and online advertising of less healthy foods are planned for the whole of the UK alongside the England-specific bans on location- and price-based promotions [16].

Despite the innovative approach in England, neither the regulations on TV and online advertising of less healthy foods nor those on price- and location-based promotions of these foods have cleared the parliamentary process. The UK government recently accepted an amendment to the TV and online advertising restrictions to give the Secretary of State for Health and Social Care power to delay implementation [17]. The restrictions on price- and location-based promotions may be under threat of being dropped altogether [18].

Piernas and colleagues’ studies add to the accumulating evidence that restricting marketing of less healthy foods and encouraging marketing of healthier foods may be an effective way to support public health. Theory and a range of evidence suggest that simultaneous restrictions on a variety of different types of less healthy food marketing are likely to be the most effective way of reducing exposure to this marketing. The UK government has proposed this approach in England on a number of occasions. 
That implementation continues to hang in the balance is a sad indictment of our collective inability to create a world that supports everyone to eat in the way they want to, rather than the way the marketers want for us.  相似文献   

15.
In this Perspective, Dimitrios Sagris, Stephanie Harrison, and Gregory Lip discuss new evidence concerning the paradoxical relationship between circulating lipids and atrial fibrillation.

Although the prevalence of cardiovascular comorbidities and associated risk factors such as diabetes mellitus, chronic kidney disease, and obesity increases with age, lipid levels may go through several changes over the course of a lifetime, associated with sex, ethnicity, and metabolic profile [1]. The association between lipid levels and atherosclerotic disease is well established [2], but the association between lipid levels and incidence of atrial fibrillation (AF) has not been fully elucidated.

Despite the association between high lipoprotein levels and the increased risk of atherosclerosis and coronary artery disease (CAD), which, in turn, may lead to an increased risk of AF [3], several studies have suggested that high levels of low-density lipoprotein cholesterol (LDL-C), total cholesterol (TC), and high-density lipoprotein cholesterol (HDL-C) are associated with a lower risk of AF [4]. The clinical significance and pathophysiological mechanisms of this paradoxical inverse association between lipid levels and AF risk remain unclear [4].

In an accompanying study in PLOS Medicine, Mozhu Ding and colleagues conducted a large population-based study of >65,000 adults aged 45 to 60 years without any history of cardiovascular disease, using data from the Swedish National Patient Register and Cause of Death Register [5]. Using International Classification of Diseases (ICD) codes from discharge diagnoses of hospital visits and causes of death captured in these registries, participants were followed up for up to 35 years for incident AF. Higher levels of TC and LDL-C were associated with lower risk of AF within the first 5 years (hazard ratio [HR]: 0.61, 95% confidence interval [CI]: 0.41 to 0.99; HR: 0.64, 95% CI: 0.45 to 0.92), but the effect was attenuated after 5 years of follow-up. Conversely, lower levels of HDL-C, high triglyceride (TG) levels, and a high TG/HDL-C ratio were consistently associated with a higher risk of AF over 3 decades of follow-up (HRs ranging from 1.13 [95% CI: 1.07 to 1.19, p < 0.001] to 1.53 [95% CI: 1.12 to 2.00]).

Previous longitudinal studies have demonstrated that levels of the majority of lipoproteins increase significantly between the ages of 20 and 50 years before plateauing in older age; this pattern is observed mainly in men, while in women, increasing lipoproteins are associated with menopause [6]. These early findings were recently confirmed in a cross-sectional study in a Chinese population, in which TC and LDL-C were found to plateau between the ages of 40 and 60 years in men and at around 60 years in women, before declining markedly [7]. Considering the inverse association of TC and LDL-C with aging, and the association between aging and higher prevalence of AF, this may partially explain the inverse association of TC and LDL-C with AF.

Another interesting finding reported by Mozhu Ding and colleagues is the inverse association of HDL-C levels with AF incidence and the association of high TG levels and a high TG/HDL-C ratio with increased risk of AF [8]. This association remained consistent for more than 10 years, suggesting a potentially strong association of HDL-C and TG with incident AF. Since low HDL-C and high TG levels are important components of metabolic syndrome, this finding may point to a role of metabolic syndrome and its components in the risk of AF. 
A recent meta-analysis of 6 cohort studies, including 30,810,460 patients, showed that metabolic syndrome and low HDL-C were associated with a significantly higher risk of AF (HR: 1.57; 95% CI: 1.40 to 1.77, and HR: 1.18; 95% CI: 1.06 to 1.32, respectively) [9]; a minimal sketch of how such study-level hazard ratios are typically pooled is given at the end of this item. Based on this evidence, we can speculate that a combination of low HDL-C and high non-HDL-C (i.e., TC excluding HDL-C) may have a potential association with AF risk. Nonetheless, a nationwide cross-sectional survey suggested that non-HDL-C may also be associated with a lower risk of AF [10].

The associations observed in the study by Mozhu Ding and colleagues may have been related to lipid-lowering therapies, but as the authors highlight, lipid-lowering medicines were uncommonly used in the first few years of the study period, and it is unlikely that these could have influenced the associations observed at the beginning of the follow-up period.

The association of lipid levels with incident AF remained consistent both in patients with and without heart failure (HF) or CAD. However, in the sensitivity analysis including only patients for whom data on use of medications were available, among those with HF or CAD, the risk of AF was lower in those treated with lipid-lowering medication compared to those who were not treated [8]. It seems that although lipid levels are correlated with incident AF irrespective of the presence of HF or CAD, in this high-risk population, the use of lipid-lowering medication may reduce the risk of AF, as was previously suggested [11].

Mozhu Ding and colleagues have been able to conduct a large-scale study with a long follow-up period, and the findings agree with previous observational evidence. As with all observational studies, residual confounding may be present. In this study, baseline lipid levels were assessed, but variability over time was not examined. A previous nationwide study in Korea suggested that high variability in lipid levels is associated with a higher risk of AF development [12]. Additionally, smoking and physical activity, both important cardiovascular parameters, were not accounted for, which may have partially influenced the observed associations.

Although the natural progression of vascular aging, chronic inflammation, and dynamic changes in cardiovascular risk factors, including dyslipidemia, may play an essential role in cardiovascular diseases and the risk of AF [13,14], the exact mechanisms of the potential inverse correlation of hyperlipidemia with AF remain elusive. The accompanying study by Mozhu Ding and colleagues supports the existing evidence on the paradoxical inverse correlation of TC, LDL-C, and HDL-C levels with the risk of future AF, providing further insights into the role of TG levels and their relationship to HDL-C levels. New insights may improve understanding of the pathophysiology behind this paradoxical observation [15]. Until then, hyperlipidemia should be assessed as part of the overall cardiovascular risk [16], and the AF paradox should not outweigh this risk.  相似文献
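As a purely illustrative aside, the following is a minimal sketch of the fixed-effect, inverse-variance approach commonly used to pool study-level hazard ratios such as those quoted above. The input hazard ratios and confidence intervals are invented for the example and are not the cohorts from the cited meta-analysis, which may also have used random-effects methods.

```python
import math

# Invented study-level hazard ratios with 95% confidence intervals
# (not the actual cohorts from the cited meta-analysis).
studies = [
    (1.40, 1.20, 1.63),
    (1.75, 1.45, 2.11),
    (1.55, 1.30, 1.85),
]

def pooled_hazard_ratio(studies):
    """Fixed-effect inverse-variance pooling on the log hazard ratio scale."""
    total_weight, weighted_sum = 0.0, 0.0
    for hr, ci_low, ci_high in studies:
        log_hr = math.log(hr)
        se = (math.log(ci_high) - math.log(ci_low)) / (2 * 1.96)  # SE recovered from CI width
        weight = 1.0 / se ** 2
        total_weight += weight
        weighted_sum += weight * log_hr
    pooled_log_hr = weighted_sum / total_weight
    pooled_se = math.sqrt(1.0 / total_weight)
    ci = (math.exp(pooled_log_hr - 1.96 * pooled_se),
          math.exp(pooled_log_hr + 1.96 * pooled_se))
    return math.exp(pooled_log_hr), ci

hr, (lo, hi) = pooled_hazard_ratio(studies)
print(f"Pooled HR {hr:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```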

16.
Julie Bines discusses an accompanying study by Sheila Isanaka and colleagues on nutrient supplementation and immune responses to rotavirus vaccination.

The introduction of rotavirus vaccines into national immunization programs globally has made a major impact on diarrhea-related hospitalizations and deaths. By 2020, 107 countries had introduced rotavirus vaccines, either nationally or regionally, including 52 countries in Africa and Asia eligible for funding through the Global Alliance for Vaccines and Immunization (Gavi) [1]. This represents a major step toward reducing under-5 child mortality, the impact of rotavirus disease on child health, and the economic burden on families and the healthcare system. A remaining challenge is the lower vaccine protective efficacy observed in children in low- and middle-income countries (LMICs), where mortality and hospitalizations due to severe rotavirus disease still occur [1]. The role of nutrition in influencing the immune response to a rotavirus vaccine is the focus of the accompanying paper by Isanaka and colleagues published in this issue of PLOS Medicine [2].

Why over 87% of children vaccinated with a rotavirus vaccine in low child mortality countries are protected against severe rotavirus disease, compared to approximately 44% (27% to 59%) of children in high child mortality countries, is not well understood [3]. Because these vaccines are orally administered, initial focus has been on factors that could neutralize the live vaccine virus within the gut lumen. Most rotavirus vaccines are administered in a buffered formulation to reduce the risk of neutralization of the vaccine virus by gastric acid [4]. In early clinical trials, fasting prior to vaccination was applied in an effort to reduce the potential impact of breast milk antibodies; this is now not considered necessary [5]. A difference in the gut microbiome between infants from high-income countries and LMICs has been observed, although the administration of a probiotic did not result in improved rotavirus vaccine immunogenicity [6,7]. Rotaviruses use histo-blood group antigens present on the gut epithelial surface in the initial phase of virus attachment and cellular entry [8]. It has been proposed that population variability in histo-blood group antigen phenotype, specifically Lewis and secretor status, may explain the genotype diversity of rotavirus between regions and the responses observed to live oral rotavirus vaccines, which may be VP4 [P] genotype dependent [8].

Childhood malnutrition is associated with reduced immune responses to a range of infections and with reduced immune responses to vaccines, including rotavirus vaccines [9]. Macro- and/or micronutrient deficiencies have been linked to a range of abnormalities in T and B cell function, mucosal immunity, cytokine production, and responses [9]. However, there are limited data on the impact of maternal nutritional supplements on infants' immune responses following vaccination. Isanaka and colleagues' cluster-randomized study was nested within a double-blind, placebo-controlled vaccine efficacy trial. It evaluated the effect of 3 different maternal nutritional supplements on serum antirotavirus immunoglobulin A (IgA) seroconversion following administration of 3 doses of the oral rotavirus vaccine Rotasiil (G1, G2, G3, G4, G9) in infants in Niger [2]. The daily supplements were commenced before 30 weeks' gestation in a population of women at risk of macro- and micronutrient malnutrition, although maternal anthropometry and micronutrient status before and after supplementation were not reported. 
The supplement options included the “standard of care” iron–folic acid (IFA) supplement, a multi-micronutrient (MMN) supplement at levels equal to or double the US recommended dietary allowance for pregnant women, or the same MMN supplement with an additional energy and protein component. As all groups received a supplement, the study was designed to provide a comparison between supplement groups rather than a comparison with no supplement. Across all supplement groups, serum antirotavirus IgA seroconversion following administration of 3 doses of Rotasiil was modest at 39.6%, only around 10 percentage points greater than that observed in the placebo group (29.0%). The rate of seroconversion did not differ between supplement groups, although serum IgA geometric mean titres were not reported. In a similar study in The Gambia, an enhanced antibody response to the diphtheria–tetanus–pertussis (DPT) vaccine was observed in the infants of mothers who had received a prenatal MMN and protein–energy supplement, compared to those who received the “standard of care” iron–folate supplement [10]. Of note, the supplement used in The Gambia study contained more energy and protein than the MMN plus energy and protein supplement used in this study in Niger (The Gambia study versus Niger study; energy [kcal]: 746 versus 237; protein [grams]: 20.8 versus 5.2; lipids [grams]: 52.6 versus 20). Whether the differences reported in vaccine immune responses between these 2 studies reflect these differences in supplement composition, differences specific to the vaccine (DPT versus rotavirus vaccine), study sample size, or characteristics of the study population requires further study.

Rotavirus vaccines save lives and prevent hospitalizations due to rotavirus disease in children. Efforts to improve the level of protection provided by rotavirus vaccines, particularly in LMICs, have the potential to maximize the impact of these vaccines on global child health. Improving the nutritional status of infants through the provision of macro- and micronutrient supplements to pregnant mothers in high-risk populations may optimize immune responses to rotavirus vaccines; however, the specific composition of the prenatal supplement requires further investigation.  相似文献

17.
18.
19.
Dr. Caitlin Moyer discusses the implications, for women globally, of restricting access to abortion care.

In late June, the landmark Roe v. Wade ruling was overturned by the United States Supreme Court, a decision, decried by human rights experts at the United Nations [1], that leaves many women and girls without the right to obtain abortion care that had been established nearly 50 years ago. The consequences of limited or nonexistent access to safe abortion services in the US remain to be seen; however, information gleaned from abortion-related policies worldwide provides insight into the likely health effects of this abrupt reversal in abortion policy. The US Supreme Court's decision should serve to amplify the global call for strategies to mitigate the inevitable repercussions for women's health.

Upholding reproductive rights is crucial for the health of women and girls worldwide, and access to safe abortion is central to this, yet policies in several countries either severely limit or actively prevent access to appropriate abortion care and services [2]. However, there is little to suggest that countries and jurisdictions with abortion bans or heavily restrictive laws see fewer abortions performed. According to a modeling study of pregnancy intentions and abortion from the 1990s to 2019, rates of unintended pregnancies ending in abortion are broadly similar regardless of a country's legal status of abortion, and unintended pregnancy rates are higher among countries with abortion restrictions [3]. Abortion is widely considered to be a low-risk procedure. Abortion-related deaths most likely occur in the context of unsafe abortion practices and are reported to account for 8% (95% UI 4.7–13.2%) of maternal deaths [4], making them a top direct contributor to maternal deaths globally, alongside hemorrhage, hypertension, and sepsis. Restrictive abortion policies may not lower overall rates of abortion, but they can drive increasing rates of unsafe abortions, as women resort to seeking abortions covertly. Such abortions are often performed by untrained practitioners or involve harmful methods. Perhaps unsurprisingly, most abortions that take place in countries with restrictive abortion access policies are not considered safe [5], potentially contributing to maternal morbidity and mortality. A study of 162 countries found that maternal mortality rates are lower in countries with more flexible abortion access laws [6], suggesting that changes in abortion policies could have grievous implications for maternal deaths.

It is not yet known whether the withdrawal of federal protection of abortion rights will impact maternal deaths in the US; however, in the years following the 1973 Roe v. Wade decision, the number of reported deaths associated with illegal abortions, defined as those performed by an unlicensed practitioner, declined, hovering between zero and 2 deaths from the 1980s to 2018, down from 35 in 1972 [7] and 19 reported in 1973 [8]. It is possible that limits on access to timely and safe abortion care could drive this number back up and add to the already unacceptably high maternal mortality rate in the US, potentially exacerbating the persistent disparities in maternal mortality based on socioeconomic deprivation, race and ethnicity, and other factors [9].

Legal and social barriers that impede access to safe abortions are detrimental to the health and survival of women and girls; thus, constructing policies ensuring access to safe abortion services should be an urgent priority. Placing undue hurdles between women and access to abortion care is associated with undesirable health outcomes. 
For example, a 2011 change to medication abortion laws in one US state that involved increased medication costs and restricted the timing and location where abortion services could be provided was associated with an increase in rates of women requiring additional medical interventions [10]. Lending international weight to this argument, dissolution of barriers to safe abortion access was emphasized in the March 2022 update of WHO guidance on abortion care [11], echoing a 2018 comment on the International Covenant on Civil and Political Rights released by the United Nations Human Rights Committee [12] that called on member states to remove existing barriers and not enact new restrictions on provision of safe abortion services, so that pregnant women and girls do not need to turn to unsafe abortions.

In jurisdictions where prohibitive policies exist, more could be done to counter the impacts of new barriers by changing how abortion care is delivered and increasing accessibility. Protocols for the safe self-management of abortion can be implemented alongside provision of information and provider support. WHO guidance [11] suggests expanding the breadth of practitioners authorized to prescribe medical abortions to include nurses, midwives, and other cadres of healthcare workers. The guidelines also mention telemedicine as an approach to circumvent obstacles to seeking safe abortion services [11]. For those with access to the necessary technology, telemedicine services together with self-management of medication abortion can overcome travel-related barriers and ensure the privacy of those seeking treatment. Demand for telehealth services increased during the COVID-19 pandemic, and, according to one study, remote provision of abortion services in the US may be a promising option to counteract barriers and facilitate access [13].

In 2022, restrictive policies or outright bans on abortion services are discriminatory against women, obstructing their right to maintain autonomy over their own sexual and reproductive health. A post-Roe legal landscape that renders abortion more difficult or impossible to obtain safely will exacerbate an increasingly bleak picture of maternal health in the US; however, the US is just one example where increased effort is needed to overcome barriers to improving women's healthcare. The reality is that such barriers continue to represent a threat to the health of women worldwide. Evidence-based changes to policy and practice that break down barriers and build new roads are required to enable women to access the healthcare they need.  相似文献

20.
Marleen Temmerman and Abdu Mohiddin discuss the accompanying study by Enny Paixao and colleagues on associations between cesarean section birth and child mortality up to age 5.

A cesarean section (CS) can be a lifesaving intervention when medically indicated, but it may also lead to adverse short- and long-term health effects for women and children. Therefore, the accompanying research study by Paixao and colleagues published in PLOS Medicine, looking at CS and associated child mortality in Brazil, provides further valuable evidence on the balance of benefits and risks [1].

CS rates are rising worldwide: Boerma and colleagues, on the basis of data from 169 countries including 98.4% of the world's births, estimated that 29.7 million (21.1%) births occurred by CS in 2015, almost double the number of CS births in 2000 (16.0 million, 12.1%) [2]. In a further study investigating CS rates in specific obstetric populations using the Robson system, which classifies all deliveries into one of 10 groups on the basis of 5 parameters (obstetric history, onset of labour, foetal lie, number of neonates, and gestational age; a simplified sketch of this grouping logic is given at the end of this item), there was an increase in CS across most Robson groups, especially after induction of labour in multiparous women [3].

WHO guidance is clear that CS is essential for those who need it, specifying a recommended rate of 10% to 15% to improve maternal and perinatal outcomes and prevent maternal and neonatal mortality and morbidity [4]. Yet, given the increasing use of CS, particularly without medical indication, a more complete understanding of its health effects on women and children has become crucial. The maternal sequelae of CS are well described, while long-term consequences for child health require more research. There is emerging evidence that babies born by CS have different hormonal, physical, bacterial, and medical exposures and that these exposures can subtly alter neonatal physiology. Short-term risks (within 3 years) of CS can include altered immune development; an increased likelihood of allergy, atopy, and asthma; and reduced intestinal gut microbiome diversity [5]. In a systematic review, CS was found to be a risk factor for respiratory tract infections (pooled odds ratio (OR) = 1.30 for asthma) as well as for obesity (pooled OR = 1.35) in children [6]. In a further study of 327,272 neonates born by vaginal delivery and 55,246 born by elective CS, investigating neonatal respiratory morbidity in relation to mode of delivery, there was a 95% higher risk in neonates delivered by elective CS than in neonates born by spontaneous vaginal delivery [7]. Further, Alterman and colleagues described a moderately elevated risk of severe lower respiratory tract infections during infancy in infants born by planned CS, as compared to those born vaginally [8]. Infants born by planned or emergency CS may also be at a small increased risk of severe upper respiratory tract infections, with a stronger estimated effect if including the indirect effect arising from planning the cesarean birth for an earlier point in gestation than would have occurred spontaneously [8].

However, the extent to which CS, in particular nonmedically indicated CS, benefits or reduces child survival remains unclear. Therefore, Paixao and colleagues conducted a population-based cohort study in Brazil by linking routine data on live births from 2012 to 2018 and assessing mortality up to 5 years of age [1]. Women with a live birth were classified into a Robson group based on pregnancy and delivery characteristics. The analysis of 17,838,115 live births showed that, among Robson groups with low expected frequencies of CS (groups 1 to 4), births by CS had a higher death rate up to age 5 years compared with vaginal deliveries (HR = 1.25, 95% CI: 1.22 to 1.28; p < 0.001). 
This means that, in Robson groups with low expected frequencies of CS (i.e., low-risk mothers), CS was associated with a 25% increase in child mortality. In groups with high expected frequencies of CS (i.e., high-risk mothers), mortality rates were lower among infants born via CS, supporting the benefits of clinically indicated CS.

This large study shows how important it is to optimise the use of CS, which is increasingly overused, leading to global concern. Underuse of CS leads to maternal and perinatal mortality and morbidity, and yet, conversely, overuse of CS has not shown benefits and can create harm. As the frequency of CS continues to increase, interventions to reduce unnecessary CS are urgently needed. As described by Betrán and colleagues, many factors can affect rates of CS, and these may be associated with women, families, health professionals, and healthcare organisations and systems, being influenced by behavioural, psychosocial, health system, and financial factors [9]. These authors concluded that interventions to reduce overuse of CS must be multicomponent and locally tailored, addressing women's and health professionals' concerns, as well as reflecting health system and financial factors [9].

Paixao and colleagues' study provides evidence that both overuse and underuse of CS are associated with child survival, and the findings will help pregnant women and their providers to make informed decisions as to whether CS is appropriate for them. The authors should be commended for carrying out this big data record linkage study, which paves the way for further analyses to study risk profiles using other available population-level data. At a health policy level, the paper shows the significant challenge to child population health posed by the sequelae of low-risk CS, especially in countries with high CS rates such as Brazil, at 56% [10]. This represents an avoidable threat to some of the gains in child mortality and morbidity seen over the past few decades and to the achievement of the UN's Sustainable Development Goal 3 to ensure health and well-being at all ages [11]. Policymakers and civil society groups should take note and act by implementing the recommendations of the 2018 International Federation of Gynaecology and Obstetrics (FIGO) position paper, calling for “joint actions with health professionals, governmental bodies, women's groups and the healthcare insurance industry to stop unnecessary caesarean sections” [12].  相似文献
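As a rough illustration of the Robson grouping logic mentioned above, here is a simplified sketch. The group definitions are abbreviated and the field names are invented for the example; the full classification has precise rules (for example, around oblique lie and labour onset with a previous CS) that should be taken from the WHO Robson classification implementation manual rather than from this sketch.

```python
from dataclasses import dataclass

@dataclass
class Delivery:
    parity: int            # number of previous births
    previous_cs: bool
    fetal_lie: str         # "cephalic", "breech", or "transverse" (incl. oblique)
    multiple_pregnancy: bool
    gestation_weeks: int
    labour_onset: str      # "spontaneous", "induced", or "cs_before_labour"

def robson_group(d: Delivery) -> int:
    """Simplified sketch of the Robson ten-group classification."""
    if d.multiple_pregnancy:
        return 8                               # all multiple pregnancies
    if d.fetal_lie == "transverse":
        return 9                               # transverse or oblique lie
    if d.fetal_lie == "breech":
        return 7 if d.parity >= 1 else 6       # breech: multiparous vs nulliparous
    # Remaining groups cover single cephalic pregnancies.
    if d.gestation_weeks < 37:
        return 10                              # preterm, single cephalic
    if d.previous_cs:
        return 5                               # term, single cephalic, previous CS
    if d.parity == 0:
        return 1 if d.labour_onset == "spontaneous" else 2
    return 3 if d.labour_onset == "spontaneous" else 4

# Example: a nulliparous woman at term with a single cephalic fetus in
# spontaneous labour falls into group 1.
print(robson_group(Delivery(parity=0, previous_cs=False, fetal_lie="cephalic",
                            multiple_pregnancy=False, gestation_weeks=39,
                            labour_onset="spontaneous")))  # -> 1
```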
