Similar literature
20 similar records found
1.
2.

Objectives

To evaluate the feasibility and effectiveness of dried blood spot (DBS) use for viral load (VL) monitoring, describing patient outcomes and programmatic challenges relevant to DBS implementation in sub-Saharan Africa.

Methods

We recruited adult antiretroviral therapy (ART) patients from five district hospitals in Malawi. Eligibility reflected anticipated Ministry of Health VL monitoring criteria. Testing was conducted at a central laboratory. Virological failure was defined as >5000 copies/ml. Primary outcomes were program feasibility (timely result availability and patient receipt) and effectiveness (second-line therapy initiation).

Results

We enrolled 1,498 participants; 5.9% were failing at baseline. Median time from enrollment to receipt of results was 42 days; 79.6% of participants received results within 3 months. Among participants with confirmed elevated VL, 92.6% initiated second-line therapy; 90.7% were switched within 365 days of VL testing. Nearly one-third (30.8%) of participants with elevated baseline VL were virally suppressed (<5,000 copies/ml) on confirmatory testing. The median interval between enrollment and specimen testing was 23 days. Adjusting for relevant covariates, participants on ART >4 years were more likely to be failing than participants on therapy 1–4 years (RR 1.7, 95% CI 1.0-2.8); older participants were less likely to be failing (RR 0.95, 95% CI 0.92-0.98). There was no difference in likelihood of failure based on clinical symptoms (RR 1.17, 95% CI 0.65-2.11).
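As a reading aid only: the abstract does not state the unit for the age estimate, but if the RR of 0.95 is per additional year of age (the usual convention for a continuous age term in a log-link model), per-unit effects multiply, so a 10-year age difference would correspond to roughly

\mathrm{RR}_{10\,\mathrm{yr}} = 0.95^{10} \approx 0.60,

that is, about a 40% lower risk of virological failure under that assumption.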

Conclusions

DBS for VL monitoring is feasible and effective in real-world clinical settings. Centralized DBS testing may increase access to VL monitoring in remote settings. Programmatic outcomes are encouraging, especially the proportion of eligible participants switched to second-line therapy.

3.
Major depression is often a relapsing disorder. It is therefore important to start its treatment with therapies that maximize the chance of not only getting the patients well but also keeping them well. We examined the associations between initial treatments and sustained response by conducting a network meta-analysis of randomized controlled trials (RCTs) in which adult patients with major depression were randomized to acute treatment with a psychotherapy (PSY), a protocolized antidepressant pharmacotherapy (PHA), their combination (COM), standard treatment in primary or secondary care (STD), or pill placebo, and were then followed up through a maintenance phase. By design, acute phase treatment could be continued into the maintenance phase, switched to another treatment or followed by discretionary treatment. We included 81 RCTs, with 13,722 participants. Sustained response was defined as responding to the acute treatment and subsequently having no depressive relapse through the maintenance phase (mean duration: 42.2±16.2 weeks, range 24-104 weeks). We extracted the data reported at the time point closest to 12 months. COM resulted in more sustained response than PHA, both when these treatments were continued into the maintenance phase (OR=2.52, 95% CI: 1.66-3.85) and when they were followed by discretionary treatment (OR=1.80, 95% CI: 1.21-2.67). The same applied to COM in comparison with STD (OR=2.90, 95% CI: 1.68-5.01 when COM was continued into the maintenance phase; OR=1.97, 95% CI: 1.51-2.58 when COM was followed by discretionary treatment). PSY also kept the patients well more often than PHA, both when these treatments were continued into the maintenance phase (OR=1.53, 95% CI: 1.00-2.35) and when they were followed by discretionary treatment (OR=1.66, 95% CI: 1.13-2.44). The same applied to PSY compared with STD (OR=1.76, 95% CI: 0.97-3.21 when PSY was continued into the maintenance phase; OR=1.83, 95% CI: 1.20-2.78 when PSY was followed by discretionary treatment). Given the average sustained response rate of 29% on STD, the advantages of PSY or COM over PHA or STD translated into risk differences ranging from 12 to 16 percentage points. We conclude that PSY and COM have more enduring effects than PHA. Clinical guidelines on the initial treatment choice for depression may need to be updated accordingly.
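As a rough check on the reported risk differences, an odds ratio can be converted to a risk difference once a comparator response rate is fixed. A back-of-envelope sketch, assuming the comparator arm responds at the 29% average reported for STD:

p_1 = \frac{\mathrm{OR}\,\frac{p_0}{1-p_0}}{1+\mathrm{OR}\,\frac{p_0}{1-p_0}}, \qquad \mathrm{RD} = p_1 - p_0 .

For example, with p_0 = 0.29 and \mathrm{OR} = 1.8, the odds rise from about 0.41 to 0.74, giving p_1 \approx 0.42 and a risk difference of roughly 13 percentage points, consistent with the 12-16 point range quoted above. The paper's own risk differences depend on the comparison-specific baseline rates, so this is illustrative only.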

4.
The validity and clinical utility of the concept of "clinical high risk" (CHR) for psychosis have so far been investigated only in risk-enriched samples in clinical settings. In this population-based prospective study, we aimed – for the first time – to assess the incidence rate of clinical psychosis and estimate the population attributable fraction (PAF) of that incidence for preceding psychosis risk states and DSM-IV diagnoses of non-psychotic mental disorders (mood disorders, anxiety disorders, alcohol use disorders, and drug use disorders). All analyses were adjusted for age, gender and education. The incidence rate of clinical psychosis was 63.0 per 100,000 person-years. The mutually-adjusted Cox proportional hazards model indicated that preceding diagnoses of mood disorders (hazard ratio, HR=10.67, 95% CI: 3.12-36.49), psychosis high-risk state (HR=7.86, 95% CI: 2.76-22.42) and drug use disorders (HR=5.33, 95% CI: 1.61-17.64) were associated with an increased risk for clinical psychosis incidence. Of the clinical psychosis incidence in the population, 85.5% (95% CI: 64.6-94.1) was attributable to prior psychopathology, with mood disorders (PAF=66.2, 95% CI: 33.4-82.9), psychosis high-risk state (PAF=36.9, 95% CI: 11.3-55.1), and drug use disorders (PAF=18.7, 95% CI: –0.9 to 34.6) as the most important factors. Although the psychosis high-risk state displayed a high relative risk for clinical psychosis outcome even after adjusting for other psychopathology, the PAF was comparatively low, given the low prevalence of psychosis high-risk states in the population. These findings provide empirical evidence for the "prevention paradox" of targeted CHR early intervention. A comprehensive prevention strategy with a focus on broader psychopathology may be more effective than the current psychosis-focused approach for achieving population-based improvements in prevention of psychotic disorders.
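The "prevention paradox" described here follows from how a population attributable fraction combines relative risk with exposure prevalence. In the simple (unadjusted) Levin form,

\mathrm{PAF} = \frac{p_e(\mathrm{RR}-1)}{1 + p_e(\mathrm{RR}-1)},

where p_e is the population prevalence of the exposure. Illustratively, taking the hazard ratio for the high-risk state as the relative risk (RR ≈ 7.86) and assuming a prevalence of about p_e = 0.08 (an assumed figure; the abstract does not report it), \mathrm{PAF} \approx \frac{0.08 \times 6.86}{1 + 0.08 \times 6.86} \approx 0.35. A rare exposure therefore yields a modest attributable fraction even when its relative risk is large, whereas a common, lower-risk exposure such as mood disorder can account for a larger share of incident psychosis.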

5.
6.

Background

In Bihar, India, high maternal anemia prevalence and low iron and folic acid supplement (IFA) receipt and consumption have continued over time despite universal IFA distribution and counseling during pregnancy.

Purpose

To examine individual and facility-level determinants of IFA receipt and consumption among pregnant women in rural Bihar, India.

Methods

Using District Level Household Survey (2007–08) data, multilevel modeling was conducted to examine the determinants of two outcomes: IFA receipt (any IFA receipt vs. none) and IFA consumption (≥90 days vs. <90 days). Individual-level and facility-level factors were included. Factor analysis was utilized to construct antenatal care (ANC) quality and health sub-center (HSC) capacity variables.
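The multilevel structure described here is typically a two-level random-intercept logistic model, with women nested within health sub-centers. A minimal sketch, assuming one individual-level covariate x_{ij} and one facility-level covariate z_j (the study's actual specification includes the full covariate sets and factor scores described above):

\operatorname{logit}\,\Pr(Y_{ij}=1) = \beta_0 + \beta_1 x_{ij} + \beta_2 z_j + u_j, \qquad u_j \sim \mathcal{N}(0,\sigma_u^2),

where Y_{ij} indicates IFA receipt (or consumption for ≥90 days) for woman i served by facility j, exponentiated coefficients give the odds ratios reported below, and \sigma_u^2 captures the facility-level variation noted in the Conclusions.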

Results

Overall, 37% of women received any IFA during their last pregnancy. Of those, 24% consumed IFA for 90 or more days. Women were more likely to receive any IFA when they received additional ANC services and counseling, and attended ANC earlier and more frequently. Significant interactions were found between ANC quality factors (odds ratio (OR): 0.37, 95% confidence interval (CI): 0.25, 0.56) and between ANC services and ANC timing and frequency (OR: 0.68, 95% CI: 0.56, 0.82). No HSC factors were significantly associated with IFA receipt. Women were more likely to consume IFA for ≥90 days if they attended at least 4 ANC check-ups and received more ANC services. IFA supply at the HSC (OR: 1.37, 95% CI: 1.04, 1.82) was also significantly associated with IFA consumption.

Conclusions

Our findings indicate that individual and ANC factors (timing, frequency, and quality) play a key role in facilitating IFA receipt and consumption. Although HSC capacity factors were not found to influence our outcomes, significant variation at the facility level indicates unmeasured factors that could be important to address in future interventions.

7.
The goal of this study was to evaluate changes in plasma human immunodeficiency virus (HIV) RNA concentration [viral load (VL)] and CD4+ percentage (CD4%) during 6-12 weeks postpartum (PP) among HIV-infected women and to assess differences according to the reason for receipt of antiretrovirals (ARVs) during pregnancy [prophylaxis (PR) vs. treatment (TR)]. Data from a prospective cohort of HIV-infected pregnant women (National Institute of Child Health and Human Development International Site Development Initiative Perinatal Study) were analyzed. Women experiencing their first pregnancy who received ARVs for PR (started during pregnancy, stopped PP) or for TR (initiated prior to pregnancy and/or continued PP) were included and were followed PP. Increases in plasma VL (> 0.5 log10) and decreases in CD4% (> 20% relative decrease in CD4%) between hospital discharge (HD) and PP were assessed. Of the 1,229 women enrolled, 1,119 met the inclusion criteria (PR: 601; TR: 518). At enrollment, 87% were asymptomatic. The median CD4% values were: HD [34% (PR); 25% (TR)] and PP [29% (PR); 24% (TR)]. The VL increases were 60% (PR) and 19% (TR) (p < 0.0001). The CD4% decreases were 36% (PR) and 18% (TR) (p < 0.0001). Women receiving PR were more likely to exhibit an increase in VL [adjusted odds ratio (AOR) 7.7; 95% CI: 5.5-10.9] and a CD4% decrease (AOR 2.3; 95% CI: 1.6-3.2). Women receiving PR are more likely to have VL increases and CD4% decreases compared to those receiving TR. The clinical implications of these VL and CD4% changes remain to be explored.

8.

Background

A large-scale prevalence survey of blindness and visual impairment (the Andhra Pradesh Eye Diseases Study [APEDS1]) was conducted between 1996 and 2000 on 10,293 individuals of all ages in three rural clusters and one urban cluster in Andhra Pradesh, Southern India. More than a decade later (June 2009-March 2010), APEDS1 participants in the rural clusters were traced (termed APEDS2) to determine ocular risk factors for mortality in this longitudinal cohort.

Methods and Findings

Mortality hazard ratio (HR) analysis was performed for those aged >30 years at APEDS1, using Cox proportional hazards regression models to identify associations between ocular exposures and risk of mortality. Blindness and visual impairment (VI) were defined using Indian definitions. In total, 799/4,188 (19.1%) participants had died and 308 (7.3%) had migrated. Mortality was higher in males than females (p<0.001). In multivariable analysis, after adjusting for age, gender, diabetes, hypertension, body mass index, smoking and education status, the mortality HR was 1.9 (95% CI: 1.5-2.5) for blindness; 1.4 (95% CI: 1.2-1.7) for VI; 1.8 (95% CI: 1.4-2.3) for pure nuclear cataract; 1.5 (95% CI: 1.1-2.1) for pure cortical cataract; 1.96 (95% CI: 1.6-2.4) for mixed cataract; 2.0 (95% CI: 1.4-2.9) for history of cataract surgery; and 1.58 (95% CI: 1.3-1.9) for any cataract. When all these factors were included in the model, the HRs were attenuated, being 1.5 (95% CI: 1.1-2.0) for blindness and 1.2 (95% CI: 0.9-1.5) for VI. For lens type, the HRs were as follows: pure nuclear cataract, 1.6 (95% CI: 1.3-2.1); pure cortical cataract, 1.5 (95% CI: 1.1-2.1); mixed cataract, 1.8 (95% CI: 1.4-2.2), and history of previous cataract surgery, 1.8 (95% CI: 1.3-2.6).
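For reference, the hazard ratios above come from models of the generic Cox proportional hazards form (a sketch, not the authors' exact specification):

h(t \mid \mathbf{x}) = h_0(t)\,\exp(\beta_1 x_1 + \dots + \beta_k x_k), \qquad \mathrm{HR}_j = e^{\beta_j},

so an HR of 1.9 for blindness means that, at any given follow-up time and with the listed covariates held fixed, the estimated mortality hazard for blind participants is 1.9 times that of participants without blindness.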

Conclusions

All types of cataract, history of cataract surgery, and VI were associated with an increased risk of mortality, which further suggests that these could be potential markers of ageing.

9.
Background

The importance of patient-reported outcome measurement in chronic kidney disease (CKD) populations has been established. However, there remains a lack of research that has synthesised data around CKD-specific symptom and health-related quality of life (HRQOL) burden globally, to inform focused measurement of the most relevant patient-important information in a way that minimises patient burden. The aim of this review was to synthesise symptom prevalence/severity and HRQOL data across the following CKD clinical groups globally: (1) stage 1–5 and not on renal replacement therapy (RRT), (2) receiving dialysis, or (3) in receipt of a kidney transplant.

Methods and findings

MEDLINE, PsycINFO, and CINAHL were searched for English-language cross-sectional/longitudinal studies reporting prevalence and/or severity of symptoms and/or HRQOL in CKD, published between January 2000 and September 2021, including adult patients with CKD, and measuring symptom prevalence/severity and/or HRQOL using a patient-reported outcome measure (PROM). Random effects meta-analyses were used to pool data, stratified by CKD group: not on RRT, receiving dialysis, or in receipt of a kidney transplant. Methodological quality of included studies was assessed using the Joanna Briggs Institute Critical Appraisal Checklist for Studies Reporting Prevalence Data, and an exploration of publication bias was performed. The search identified 1,529 studies, of which 449, with 199,147 participants from 62 countries, were included in the analysis. Studies used 67 different symptom and HRQOL outcome measures, which provided data on 68 reported symptoms. Random effects meta-analyses highlighted the considerable symptom and HRQOL burden associated with CKD, with fatigue particularly prevalent, both in patients not on RRT (14 studies, 4,139 participants: 70%, 95% CI 60%–79%) and those receiving dialysis (21 studies, 2,943 participants: 70%, 95% CI 64%–76%). A number of symptoms were significantly (p < 0.05 after adjustment for multiple testing) less prevalent and/or less severe within the post-transplantation population, which may suggest attribution to CKD (fatigue, depression, itching, poor mobility, poor sleep, and dry mouth). Quality of life was commonly lower in patients on dialysis (36-Item Short Form Health Survey [SF-36] Mental Component Summary [MCS] 45.7 [95% CI 45.5–45.8]; SF-36 Physical Component Summary [PCS] 35.5 [95% CI 35.3–35.6]; 91 studies, 32,105 participants for MCS and PCS) than in other CKD populations (patients not on RRT: SF-36 MCS 66.6 [95% CI 66.5–66.6], p = 0.002; PCS 66.3 [95% CI 66.2–66.4], p = 0.002; 39 studies, 24,600 participants; transplant: MCS 50.0 [95% CI 49.9–50.1], p = 0.002; PCS 48.0 [95% CI 47.9–48.1], p = 0.002; 39 studies, 9,664 participants). Limitations of the analysis are the relatively few studies contributing to symptom severity estimates and inconsistent use of PROMs (different measures and time points) across the included literature, which hindered interpretation.

Conclusions

The main findings highlight the considerable symptom and HRQOL burden associated with CKD. The synthesis provides a detailed overview of the symptom/HRQOL profile across clinical groups, which may support healthcare professionals when discussing, measuring, and managing the potential treatment burden associated with CKD.

Protocol registration

PROSPERO CRD42020164737.

In a systematic review and meta-analysis, Benjamin R. Fletcher and colleagues study patient-reported symptom prevalence, severity, and health-related quality of life among individuals with different stages of chronic kidney disease in 62 countries.

10.
Sputum cultures are an important tool in monitoring the response to tuberculosis treatment, especially in multidrug-resistant tuberculosis. There has, however, been little study of the effect of treatment regimen composition on culture conversion. Well-designed clinical trials of new anti-tuberculosis drugs require this information to establish optimized background regimens for comparison. We conducted a retrospective cohort study to assess whether the use of an aggressive multidrug-resistant tuberculosis regimen was associated with more rapid sputum culture conversion. We conducted Cox proportional-hazards analyses to examine the relationship between receipt of an aggressive regimen for the 14 prior consecutive days and sputum culture conversion. Sputum culture conversion was achieved in 519 (87.7%) of the 592 patients studied. Among patients who had sputum culture conversion, the median time to conversion was 59 days (IQR: 31–92). In 480 patients (92.5% of those with conversion), conversion occurred within the first six months of treatment. Exposure to an aggressive regimen was independently associated with sputum culture conversion during the first six months of treatment (HR: 1.36; 95% CI: 1.10, 1.69). Infection with human immunodeficiency virus (HR 3.36; 95% CI: 1.47, 7.72) and receiving less exposure to tuberculosis treatment prior to the individualized multidrug-resistant tuberculosis regimen (HR: 1.58; 95% CI: 1.28, 1.95) were also independently positively associated with conversion. Tachycardia (HR: 0.77; 95% CI: 0.61, 0.98) and respiratory difficulty (HR: 0.78; 95% CI: 0.62, 0.97) were independently associated with a lower rate of conversion. This study is the first demonstrating that the composition of the multidrug-resistant tuberculosis treatment regimen influences the time to culture conversion. These results support the use of an aggressive regimen as the optimized background regimen in trials of new anti-TB drugs.

11.
There is a lack of information about the seroepidemiology of T. gondii infection in the general population of Durango City, Mexico. Anti-Toxoplasma gondii IgG and IgM antibodies were sought in 974 inhabitants of Durango City, Mexico with the use of enzyme-linked immunoassays. In total, 59 (6.1%) of 974 participants (mean age 37 ± 16.1 yr) had IgG anti-T. gondii antibodies. Twenty (2.1%) of them also had IgM anti-T. gondii antibodies. IgG levels of 13-99, 100-150, and >150 International Units (IU)/ml were found in 14 (23.7%), 3 (5.1%), and 42 (71.2%) anti-T. gondii IgG-positive participants, respectively. Prevalence of infection increased with age (P < 0.05), and was significantly lower in participants born in Durango State than in those born in other Mexican states (P < 0.01). Toxoplasma gondii infection was significantly associated with consumption of boar meat (adjusted odds ratio [OR] = 3.02; 95% confidence interval [CI]: 1.49-6.13), and squirrel meat (adjusted OR = 2.18; 95% CI: 1.17-4.09). In addition, infection was negatively associated with travel abroad (adjusted OR = 0.42; 95% CI: 0.23-0.77), and salami consumption (adjusted OR = 0.57; 95% CI: 0.32-0.99). This is the first report of seroprevalence and contributing factors for T. gondii infection in the general population of Durango City, and of an association of the consumption of boar meat with T. gondii infection. This study provides a basis for the design of successful preventive measures against T. gondii infection.

12.
Chronic kidney disease (CKD) is an important cause of morbidity and mortality in HIV-positive individuals. Hepatitis C (HCV) co-infection has been associated with increased risk of CKD, but prior studies lack information on potential mechanisms. We evaluated the association between HCV or hepatitis B (HBV) co-infection and progressive CKD among 3,441 antiretroviral-treated clinical trial participants. Progressive CKD was defined as the composite of end-stage renal disease, renal death, or a significant decline in estimated glomerular filtration rate (eGFR) (25% decline to eGFR <60 mL/min/1.73 m² or 25% decline with a baseline <60). Generalized Estimating Equations were used to model the odds of progressive CKD. At baseline, 13.8% and 3.3% of participants were co-infected with HCV and HBV, respectively. Median eGFR was 111 mL/min/1.73 m², and 3.7% developed progressive CKD. After adjustment, the odds of progressive CKD were increased in participants with HCV (OR 1.72, 95% CI 1.07-2.76) or HBV (OR 2.26, 95% CI 1.15-4.44). Participants with undetectable or low HCV-RNA had similar odds of progressive CKD as HCV-seronegative participants, while participants with HCV-RNA >800,000 IU/ml had increased odds (OR 3.07; 95% CI 1.60-5.90). Interleukin-6, hyaluronic acid, and the FIB-4 hepatic fibrosis index were higher among participants who developed progressive CKD, but were no longer associated with progressive CKD after adjustment. Future studies should validate the relationship between HCV viremia and CKD. TRIAL REGISTRATION: ClinicalTrials.gov NCT00027352; NCT00004978.

13.
Jing JJ, Li M, Yuan Y. Gene 2012, 497(2): 237-242.
Toll-like receptor 4 (TLR4) is critical in the recognition of Gram-negative bacteria, serving as a key immune system effector. Recently, a number of case-control studies were conducted to investigate the association between TLR4 gene polymorphisms and cancer risk, especially the Asp299Gly and Thr399Ile polymorphisms. However, published data were still conflicting. In this paper, we summarized 9,463 cancer cases and 10,825 controls from 22 studies and attempted to assess the association of TLR4 gene polymorphisms with cancer susceptibility in a pooled meta-analysis. Odds ratios (ORs) with 95% confidence intervals (CIs) were estimated to assess the relationship. Our results suggested that Asp299Gly represented a risk factor for cancers of the digestive system (G allele versus A allele, OR=1.64, 95% CI: 1.02-2.64; GA+GG versus AA, OR=1.64, 95% CI: 1.00-2.71) but tended to have a protective effect on prostate cancer (GG versus AA, OR=0.37, 95% CI: 0.14-0.98; GG versus GA+AA, OR=0.37, 95% CI: 0.14-0.98). The Thr399Ile polymorphism was significantly associated with an elevated cancer risk in the overall analysis (T allele versus C allele, OR=1.72, 95% CI: 1.27-2.33; TC versus CC, OR=1.63, 95% CI: 1.18-2.26; TT+TC versus CC, OR=1.70, 95% CI: 1.24-2.34) and especially in the gastrointestinal subgroup (T allele versus C allele, OR=2.01, 95% CI: 1.40-2.89; TC versus CC, OR=1.86, 95% CI: 1.26-2.74; TT+TC versus CC, OR=1.97, 95% CI: 1.35-2.88). Further prospective research with larger numbers of worldwide participants is warranted to draw comprehensive and reliable conclusions.
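The pooled ORs above combine study-level log odds ratios weighted by the inverse of their variances. A minimal fixed-effect sketch in Python (the function name and the reconstruction of standard errors from 95% CIs are illustrative assumptions; a random-effects analysis would add a between-study variance term to each weight):

import math

def pooled_or(studies):
    """Inverse-variance pooling of odds ratios.

    studies: list of (or_hat, ci_lower, ci_upper) tuples on the OR scale.
    Returns (pooled OR, 95% CI lower, 95% CI upper).
    """
    num = den = 0.0
    for or_hat, lo, hi in studies:
        log_or = math.log(or_hat)
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE recovered from the CI width
        w = 1.0 / se ** 2                                # inverse-variance weight
        num += w * log_or
        den += w
    pooled_log, pooled_se = num / den, math.sqrt(1.0 / den)
    return (math.exp(pooled_log),
            math.exp(pooled_log - 1.96 * pooled_se),
            math.exp(pooled_log + 1.96 * pooled_se))

# Example with two hypothetical studies:
# pooled_or([(1.6, 1.1, 2.3), (1.8, 1.2, 2.7)])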

14.
Background

Hepatocellular carcinoma (HCC) is one of the leading causes of cancer-related deaths in the United States (US), with substantial disparities observed in cancer incidence and survival among racial groups. This study analyzes racial and ethnic disparities among patients with HCC.

Methods

This is a cross-sectional analysis of data from the National Inpatient Sample (NIS) between 2011 and 2016, utilizing the STROBE guidelines. Multivariate logistic regression analyses were used to examine the risk-adjusted associations between race and pre-treatment clinical presentation, surgical procedure allocation, and post-treatment hospital outcomes. All clinical parameters were identified using ICD-9-CM and ICD-10-CM diagnosis and procedure codes.

Results

A total of 83,876 weighted HCC hospitalizations were reported during the study period. Patient demographics were divided according to NIS racial/ethnic categorization, which includes Caucasian (57.3%), African American (16.9%), Hispanic (15.7%), Asian or Pacific Islanders (9.3%), and Native American (0.8%). The odds of hospitalization with an Elixhauser Comorbidity Index > 4 were significantly higher among Native Americans (aOR=1.79; 95% CI: 1.23–2.73), African Americans (aOR=1.24; 95% CI: 1.12–1.38), and Hispanics (aOR=1.11; 95% CI: 1.01–1.24). Risk-adjusted analysis of the association between race and receipt of surgical procedures demonstrated that the odds of having surgery were significantly lower for African Americans (aOR=0.64; 95% CI: 0.55–0.73) and Hispanics (aOR=0.70; 95% CI: 0.59–0.82), while significantly higher for Asians/Pacific Islanders (aOR=1.36; 95% CI: 1.28–1.63). The odds of post-operative complications were significantly lower for African Americans (aOR=0.68; 95% CI: 0.55–0.86), while the odds of in-hospital mortality were significantly higher for African Americans (aOR=1.28; 95% CI: 1.11–1.49) and Asians/Pacific Islanders (aOR=1.26; 95% CI: 1.13–1.62).

Conclusions

After controlling for potential confounders, there were significant racial disparities in pre-treatment presentations, surgical procedure allocations, and post-treatment outcomes among patients with HCC. Further studies are needed to determine the factors underlying these disparities and to develop targeted interventions that reduce these disparities in care.

15.
Purpose

To investigate the associations of time spent sedentary, in moderate-to-vigorous-intensity physical activity (MVPA) and physical activity energy expenditure (PAEE) with physical capability measures at age 60-64 years.

Methods

Time spent sedentary, time spent in MVPA, and PAEE were assessed using individually calibrated combined heart rate and movement sensing among 1727 participants from the MRC National Survey of Health and Development in England, Scotland and Wales as part of a detailed clinical assessment undertaken in 2006-2010. Multivariable linear regression models were used to examine the cross-sectional associations of standardised measures of each of these behavioural variables with grip strength, chair rise and timed up-&-go (TUG) speed and standing balance time.

Results

Greater time spent in MVPA was associated with higher levels of physical capability; adjusted mean differences in each capability measure per 1 standard deviation increase in MVPA time were: grip strength (0.477 kg, 95% confidence interval (CI): 0.015 to 0.939), chair rise speed (0.429 stands/min, 95% CI: 0.093 to 0.764), standing balance time (0.028 s, 95% CI: 0.003 to 0.053) and TUG speed (0.019 m/s, 95% CI: 0.011 to 0.026). In contrast, time spent sedentary was associated with lower grip strength (-0.540 kg, 95% CI: -1.013 to -0.066) and TUG speed (-0.011 m/s, 95% CI: -0.019 to -0.004). Associations for PAEE were similar to those for MVPA.

Conclusion

Higher levels of MVPA and overall physical activity (PAEE) are associated with greater levels of physical capability, whereas time spent sedentary is associated with lower levels of capability. Future intervention studies in older adults should focus on both the promotion of physical activity and reduction in time spent sedentary.

16.
Exposure to adverse childhood experiences (ACEs), including maltreatment and family dysfunction, is a major contributor to the global burden of disease and disability. With a large body of international literature on ACEs having emerged over the past 25 years, it is timely to synthesize the available evidence to estimate the global prevalence of ACEs and, through a series of moderator analyses, determine which populations are at higher risk. We searched Medline, PsycINFO and Embase for studies published between January 1, 1998 and August 5, 2021. Study inclusion criteria were using the 8- or 10-item ACE Questionnaire (±2 items), reporting the prevalence of ACEs in population samples of adults, and being published in English. The review protocol was registered with PROSPERO (CRD42022348429). In total, 206 studies (208 sample estimates) from 22 countries, with 546,458 adult participants, were included. The pooled prevalence of the five levels of ACEs was: 39.9% (95% CI: 29.8-49.2) for no ACE; 22.4% (95% CI: 14.1-30.6) for one ACE; 13.0% (95% CI: 6.5-19.8) for two ACEs; 8.7% (95% CI: 3.4-14.5) for three ACEs, and 16.1% (95% CI: 8.9-23.5) for four or more ACEs. In subsequent moderation analyses, there was strong evidence that the prevalence of 4+ ACEs was higher in populations with a history of a mental health condition (47.5%; 95% CI: 34.4-60.7) and with substance abuse or addiction (55.2%; 95% CI: 45.5-64.8), as well as in individuals from low-income households (40.5%; 95% CI: 32.9-48.4) and unhoused individuals (59.7%; 95% CI: 56.8-62.4). There was also good evidence that the prevalence of 4+ ACEs was larger in minoritized racial/ethnic groups, particularly when comparing study estimates in populations identifying as Indigenous/Native American (40.8%; 95% CI: 23.1-59.8) to those identifying as White (12.1%; 95% CI: 10.2-14.2) and Asian (5.6%; 95% CI: 2.4-10.2). Thus, ACEs are common in the general population, but there are disparities in their prevalence. They are among the principal antecedent threats to individual well-being and, as such, constitute a pressing social issue globally. Both prevention strategies and downstream interventions are needed to reduce the prevalence and mitigate the severity of the effects of ACEs and thereby reduce their deleterious health consequences on future generations.

17.

Background

This study explored the relationship between the glycated hemoglobin (HbA1c) level in patients with or without diabetes mellitus and future risks of cardiovascular disease and death.

Methods

Based on a nationally representative cohort, a total of 5277 participants (7% with diabetes) were selected from Taiwan's Triple High Survey in 2002. Comorbidities, medication use, and the outcomes of cardiovascular disease and death were extracted from Taiwan's National Health Insurance Research Database and National Death Registry.

Results

After a median follow-up of 9.7 years, participants with diabetes had a higher incidence of new-onset cardiovascular disease (17.9 versus 3.16 cases per 1000 person-years) and death (20.1 versus 4.96 cases per 1000 person-years) than those without diabetes (all P < 0.001). Diabetes was associated with an increased risk of all-cause death after adjusting for all confounders (adjusted hazard ratio [HR]: 2.29, 95% confidence interval [CI]: 1.52-3.45). Every 1% increment of HbA1c was positively associated with the risk of total cardiovascular disease (HR: 1.2, 95% CI: 1.08-1.34) and the risk of death (HR: 1.14, 95% CI: 1.03-1.26) for all participants. Compared with the reference group with HbA1c below 5.5%, participants with HbA1c levels ≥7.5% had significantly elevated future risks of total cardiovascular disease (HR: 1.82, 95% CI: 1.01-3.26) and all-cause death (HR: 2.45, 95% CI: 1.45-4.14).
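Because these hazard ratios are expressed per 1% (absolute) increment of HbA1c, effects for larger contrasts compound multiplicatively under the model's log-linear assumption: for example, a 2-point higher HbA1c would correspond to \mathrm{HR} = 1.2^{2} = 1.44 for total cardiovascular disease. This is an illustrative extrapolation only; the categorical comparison (HbA1c ≥7.5% vs. <5.5%) suggests the relationship need not be exactly log-linear across the whole range.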

Conclusions/Interpretation

Elevated HbA1c levels were associated with increased risks of cardiovascular disease and death; in particular, suboptimal glycemic control with an HbA1c level over 7.5% (58.5 mmol/mol) was strongly associated with increased risks of cardiovascular disease and all-cause death.

18.

Background

Small body size at birth is associated with an increased risk of cardiovascular disease and type 2 diabetes. Dietary habits are tightly linked with these disorders, but the association between body size at birth and adult diet has been little studied. We examined the association between body size at birth and intake of foods and macronutrients in adulthood.

Methodology/Principal Findings

We studied 1,797 participants, aged 56 to 70, of the Helsinki Birth Cohort Study, whose birth weight and length were recorded. Preterm births were excluded. During a clinical study, diet was assessed with a validated food-frequency questionnaire. A linear regression model adjusted for potential confounders was used to assess the associations. Intake of fruits and berries was 13.26 g (95% confidence interval [CI]: 0.56, 25.96) higher per 1 kg/m³ increase in ponderal index (PI) at birth, and 83.16 g (95% CI: 17.76, 148.56) higher per 1 kg higher birth weight. One unit higher PI at birth was associated with 0.14% of energy (E%) lower intake of fat (95% CI: -0.26, -0.03) and 0.18 E% higher intake of carbohydrates (95% CI: 0.04, 0.32) as well as 0.08 E% higher sucrose (95% CI: 0.00, 0.15), 0.05 E% higher fructose (95% CI: 0.01, 0.09), and 0.18 g higher fiber (95% CI: 0.02, 0.34) intake in adulthood. Similar associations were observed between birth weight and macronutrient intake.

Conclusions

Prenatal growth may modify later life food and macronutrient intake. Altered dietary habits could potentially explain an increased risk of chronic disease in individuals born with small body size.

19.
Background

Prior studies have reported higher HIV prevalence among prisoners than in the general population in Brazil, but data have been derived from single prisons. The aim of this study was to evaluate HIV testing practices, prevalence and linkage to care among inmates in a network of 12 prisons.

Methods

We administered a questionnaire to a population-based sample of inmates from 12 prisons in Central-West Brazil and collected sera for HIV and syphilis testing from January to December 2013. We evaluated factors associated with HIV testing and infection using multivariable logistic regression models. Six months after HIV testing, we assessed whether each HIV-infected prisoner was engaged in clinical care and whether they had started antiretroviral therapy.

Results

We recruited 3,362 inmates, of whom 2,843 (85%) were men from 8 prisons, and 519 (15%) were women from 4 prisons. Forty-five percent of participants reported never having been tested for HIV previously. In multivariable analysis, the variables associated with previous HIV testing were lack of a stable partner (adjusted odds ratio [AOR]: 1.38; 95% CI: 1.18–1.60), more than four years of schooling (AOR 1.40; 95% CI: 1.20–1.64), history of previous incarceration (AOR: 1.68; 95% CI: 1.43–1.98), history of mental illness (AOR 1.52; 95% CI: 1.31–1.78) and previous surgery (AOR 1.31; 95% CI: 1.12–1.52). Fifty-four (1.6%) of all participants tested positive for HIV; this included 44 (1.54%) men and 10 (1.92%) women. Among male inmates, HIV infection was associated with homosexuality (AOR 6.20, 95% CI: 1.73–22.22), self-report of mental illness (AOR 2.18, 95% CI: 1.13–4.18), history of sexually transmitted infections (AOR 3.28, 95% CI: 1.64–6.56), and syphilis seropositivity (AOR 2.54, 95% CI: 1.20–5.39). Among HIV-infected individuals, 34 (63%) were unaware of their HIV status; only 23 of these 34 (68%) newly diagnosed participants could be reached at six-month follow-up, and 21 of 23 (91%) were engaged in HIV care.

Conclusions

HIV testing rates among prison inmates are low, and the majority of HIV-infected inmates were unaware of their HIV diagnosis. Incarceration can be an opportunity for diagnosis and treatment of HIV among vulnerable populations who have poor access to health services, but further work is needed on transitional HIV care for released inmates.

20.
The aim of our study was to compare apolipoprotein B (apoB), non-high density lipoprotein cholesterol (nonHDL-C), low density lipoprotein cholesterol (LDL-C), and other lipid markers as predictors of coronary heart disease (CHD) in Chinese adults. Overall, 122 of 3,568 adult participants from a community-based cohort developed CHD during a median 13.6 years of follow-up. The multivariate relative risk of CHD in the highest quintile compared with the lowest quintile was 2.74 [95% confidence interval (CI), 1.45-5.19] for apoB, 1.98 (95% CI, 1.00-3.92) for nonHDL-C, and 1.86 (95% CI, 1.00-3.49) for LDL-C (all tests for trend, P < 0.05). ApoB also had the highest receiver operating characteristic curve area (0.63; 95% CI, 0.58-0.68) in predicting CHD. When apoB and nonHDL-C were mutually adjusted, only apoB was predictive; the relative risk was 2.80 (95% CI, 1.31-5.96; P = 0.001) compared with 1.09 (95% CI, 0.49-2.40; P = 0.75) for nonHDL-C. Compared with the lowest risk, participants with the highest apoB and total cholesterol/HDL-C had a 3-fold increased risk of developing CHD (relative risk = 3.21; 95% CI, 1.45-7.14). These data provide strong evidence that apoB concentration was a better predictor of CHD than other lipid markers in this Chinese population.
