1.

Background

HIV infection is a major contributor to maternal mortality in resource-limited settings. The Drug Resource Enhancement Against AIDS and Malnutrition Programme has been promoting HAART use during pregnancy and postpartum for prevention of mother-to-child HIV transmission (PMTCT), irrespective of maternal CD4 cell counts, since 2002.

Methods

Records for all HIV+ pregnancies followed in Mozambique and Malawi from 6/2002 to 6/2010 were reviewed. The cohort comprised pregnancies in which women were referred for PMTCT and started HAART during prenatal care (n = 8172, group 1) and pregnancies in which women were referred while already on established HAART (n = 1978, group 2).

Results

10,150 pregnancies were followed. Baseline medians were age 26 years (IQR 23–30), CD4 count 392 cells/mm3 (IQR 258–563), viral load 3.9 log10 copies/ml (IQR 3.2–4.4), BMI 23.4 (IQR 21.5–25.7), and hemoglobin 10.0 (IQR 9.0–11.0). 101 maternal deaths (0.99%) occurred between pregnancy and 6 weeks postpartum: 87 (1.1%) in group 1 and 14 (0.7%) in group 2. Mortality was 1.3% in women with <350 CD4 cells/mm3 and 0.7% in women with >350 CD4 cells/mm3 [OR = 1.9 (95% CI 1.3–2.9), p = 0.001]. Mortality was higher in patients with shorter antenatal HAART: 22/991 (2.2%) with less than 30 days versus 79/9159 (0.9%) with 31 days or more [OR = 2.6 (95% CI 1.6–4.2), p<0.001]. By multivariate analysis, shorter antenatal HAART (p<0.001) and baseline CD4 cell count (p = 0.012), hemoglobin (p = 0.02), and BMI (p<0.001) were associated with mortality. At four years, survival was 92% for women with shorter antenatal HAART and 98% for women on established therapy prior to pregnancy (p = 0.001).
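As a sanity check, an odds ratio like the antenatal-HAART comparison above can be reproduced from the reported counts (22/991 deaths with under 30 days of HAART versus 79/9159 with 31 days or more); a minimal sketch, using the standard Wald confidence interval:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with a Wald 95% CI from a 2x2 table:
    a deaths / b survivors (exposed), c deaths / d survivors (unexposed)."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Deaths with <30 days of antenatal HAART (22 of 991) versus
# >=31 days (79 of 9159), as reported above.
or_, lo, hi = odds_ratio_ci(22, 991 - 22, 79, 9159 - 79)
print(f"OR = {or_:.1f} (95% CI {lo:.1f}-{hi:.1f})")  # OR = 2.6 (95% CI 1.6-4.2)
```

The result matches the reported OR of 2.6 (1.6–4.2).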

Conclusions

Antiretrovirals for PMTCT have a significant impact on maternal mortality, as do CD4 counts and nutritional status. In resource-limited settings, PMTCT programs should provide universal HAART to all HIV+ pregnant women, given its impact on prevention of maternal death.

2.

Background

The accurate diagnosis of TB in HIV-infected patients, particularly those with advanced immunosuppression, is difficult. Recent studies indicate that a lipoarabinomannan (LAM) assay (Clearview-TB®-ELISA) may have some utility for the diagnosis of TB in HIV-infected patients; however, the precise subgroup that may benefit from this technology requires clarification. The utility of LAM in sputum samples has not previously been evaluated.

Methods

LAM was measured in sputum and urine samples obtained from 500 consecutively recruited ambulant patients, with suspected TB, from 2 primary care clinics in South Africa. Culture positivity for M. tuberculosis was used as the reference standard for TB diagnosis.

Results

Of 440 evaluable patients, 120/387 (31%) were HIV-infected. Urine-LAM positivity was associated with HIV positivity (p = 0.007), and test sensitivity, although low, was significantly higher in HIV-infected than in uninfected patients (21% versus 6%; p<0.001), and in HIV-infected participants with a CD4 count <200 versus >200 cells/mm3 (37% versus 0%; p = 0.003). Urine-LAM remained highly specific in all three subgroups (95%–100%). 25% of smear-negative but culture-positive HIV-infected patients with a CD4 count <200 cells/mm3 were urine-LAM positive. Sputum-LAM had good sensitivity (86%) but poor specificity (15%), likely due to cross-reactivity with several mouth-residing organisms, including actinomycetes and Nocardia species.
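The sensitivity and specificity figures above follow the standard 2×2 definitions against the culture reference standard; a short sketch with hypothetical counts (not the study's raw data):

```python
def diagnostic_accuracy(tp, fp, fn, tn):
    """Test-accuracy measures from a 2x2 table versus a reference standard."""
    return {
        "sensitivity": tp / (tp + fn),  # positives detected among the diseased
        "specificity": tn / (tn + fp),  # negatives among the disease-free
    }

# Hypothetical illustration: 21 of 100 culture-positive patients test
# LAM-positive; 95 of 100 culture-negative patients test LAM-negative.
m = diagnostic_accuracy(tp=21, fp=5, fn=79, tn=95)
print(m["sensitivity"], m["specificity"])  # 0.21 0.95
```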

Conclusions

These preliminary data indicate that in a high-burden primary care setting the diagnostic usefulness of urine-LAM as a rule-in test is limited to a specific patient subgroup, i.e., smear-negative HIV-infected TB patients with a CD4 count <200 cells/mm3, who would otherwise have required further investigation. However, even in this group sensitivity was modest. Future, adequately powered studies in primary care settings should specifically target patients with suspected TB who have advanced HIV infection.

3.

Background

Dilated cardiomyopathy and ischaemic heart disease can both lead to right ventricular (RV) dysfunction. Direct comparisons of the two entities regarding RV size and function using state-of-the-art imaging techniques have not yet been performed. We aimed to determine RV function and volume in dilated cardiomyopathy and ischaemic heart disease in relation to left ventricular (LV) systolic and diastolic function and systolic pulmonary artery pressure.

Methods and results

A well-characterised group (cardiac magnetic resonance imaging, echocardiography, coronary angiography and endomyocardial biopsy) of 46 patients with dilated cardiomyopathy was compared with LV ejection fraction (EF)-matched patients (n = 23) with ischaemic heart disease. Volumes and EF were determined with magnetic resonance imaging; diastolic LV function and pulmonary artery pressure with echocardiography. After multivariable linear regression, the following factors independently influenced RVEF (R2 = 0.51, p < 0.001): LVEF (r = 0.54, p < 0.001), the ratio of peak early to peak atrial transmitral Doppler flow velocity as a measure of LV filling pressure (r = −0.52, p < 0.001), and tricuspid regurgitation flow velocity as a measure of pulmonary artery pressure (r = −0.38, p = 0.001). RVEF was significantly worse in patients with dilated cardiomyopathy than in those with ischaemic heart disease: median 48% (interquartile range (IQR) 37–55%) versus 56% (IQR 48–63%), p < 0.05.

Conclusions

In patients with dilated cardiomyopathy and ischaemic heart disease, RV function is determined by LV systolic and diastolic function, the underlying cause of LV dysfunction, and pulmonary artery pressure. RV function was more impaired in dilated cardiomyopathy.

4.

Background

Bone mineral density is known to decrease rapidly after the menopause. There is limited evidence about the separate contributions of a woman's age, menopausal status and age at menopause to the incidence of hip fracture.

Methods and Findings

Over one million middle-aged women joined the UK Million Women Study in 1996–2001, providing information on their menopausal status, age at menopause, and other factors, which was updated, where possible, 3 y later. All women were registered with the UK National Health Service (NHS) and were routinely linked to information on cause-specific admissions to NHS hospitals. 561,609 women who had never used hormone replacement therapy and who provided complete information on menopausal variables (at baseline 25% were pre/perimenopausal and 75% postmenopausal) were followed for a total of 3.4 million woman-years (an average of 6.2 y per woman). During follow-up 1,676 (0.3%) were admitted to hospital with a first incident hip fracture. Among women aged 50–54 y the relative risk (RR) of hip fracture was significantly higher in postmenopausal than in premenopausal women (adjusted RR 2.22, 95% confidence interval [CI] 1.22–4.04; p = 0.009); there were too few premenopausal women aged 55 y and over for valid comparisons. Among postmenopausal women, hip fracture incidence increased steeply with age (p<0.001), with rates about seven times higher at age 70–74 y than at 50–54 y (incidence rates of 0.82 versus 0.11 per 100 women over 5 y). Among postmenopausal women of a given age there was no significant difference in hip fracture incidence between women whose menopause was due to bilateral oophorectomy and those with a natural menopause (adjusted RR 1.20, 95% CI 0.94–1.55; p = 0.15), and age at menopause had little, if any, effect on hip fracture incidence.
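The "about seven times higher" figure follows directly from the two reported 5-year incidence rates; a one-line check:

```python
# First hip fracture incidence per 100 postmenopausal women over 5 y,
# as reported above for ages 70-74 versus 50-54.
rate_70_74 = 0.82
rate_50_54 = 0.11
ratio = rate_70_74 / rate_50_54
print(round(ratio, 1))  # 7.5
```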

Conclusions

At around the time of the menopause, hip fracture incidence is about twice as high in postmenopausal as in premenopausal women, but this effect is short-lived. Among postmenopausal women, age is by far the main determinant of hip fracture incidence and, for women of a given age, age at menopause has, at most, a weak additional effect.

5.

Background

Early repolarization pattern (ERP) on electrocardiogram was associated with idiopathic ventricular fibrillation and sudden cardiac arrest in a case-control study, and with cardiovascular mortality in a Finnish community-based sample. We sought to determine ERP prevalence and its association with cardiac and all-cause mortality in a large, prospective, population-based case-cohort study (Monitoring of Cardiovascular Diseases and Conditions [MONICA]/KORA [Cooperative Health Research in the Region of Augsburg]) comprising individuals of Central-European descent.

Methods and Findings

Electrocardiograms of 1,945 participants aged 35–74 y, representing a source population of 6,213 individuals, were analyzed applying a case-cohort design. Mean follow-up was 18.9 y. Cause of death was ascertained from International Classification of Diseases, 9th revision (ICD-9) codes as documented in death certificates. ERP-attributable effects on mortality were determined by a weighted Cox proportional hazards model adjusted for covariables. Prevalence of ERP was 13.1%. ERP was associated with cardiac and all-cause mortality, most pronounced in those of younger age and male sex; a clear ERP-age interaction was detected (p = 0.005). Age-stratified analyses showed hazard ratios (HRs) for cardiac mortality of 1.96 (95% confidence interval [CI] 1.05–3.68, p = 0.035) for both sexes and 2.65 (95% CI 1.21–5.83, p = 0.015) for men aged 35–54 y. An inferior localization of ERP further increased ERP-attributable cardiac mortality, to HRs of 3.15 (95% CI 1.58–6.28, p = 0.001) for both sexes and 4.27 (95% CI 1.90–9.61, p<0.001) for men aged 35–54 y. HRs for all-cause mortality were weaker but reached significance.

Conclusions

We found a high prevalence of ERP in our population-based cohort of middle-aged individuals. ERP was associated with about a 2- to 4-fold increased risk of cardiac mortality in individuals between 35 and 54 y. An inferior localization of ERP was associated with a particularly increased risk.

6.

Background

Monitoring and evaluation (M&E) of HIV care and treatment programs is affected by losses to follow-up (LTFU) in the patient population. The severity of this effect is undeniable, but its extent is unknown. Tracing all lost patients would address this, but census methods are not feasible in programs involving rapid scale-up of HIV treatment in the developing world. Sampling-based approaches and statistical adjustment are the only scalable methods permitting accurate estimation of M&E indices.

Methodology/Principal Findings

In a large antiretroviral therapy (ART) program in western Kenya, we assessed the impact of LTFU on estimating patient mortality among 8,977 adult clients, of whom 3,624 were LTFU. Overall, dropouts were more likely to be male (36.8% versus 33.7%; p = 0.003) and younger than non-dropouts (35.3 versus 35.7 years old; p = 0.020), with a lower median CD4 count at enrollment (160 versus 189 cells/µl; p<0.001) and more WHO stage 3–4 disease (47.5% versus 41.1%; p<0.001). Urban clinic clients were 75.0% of non-dropouts but 70.3% of dropouts (p<0.001). Of the 3,624 dropouts, 1,143 were sought and 621 had their vital status ascertained. Statistical techniques were used to adjust mortality estimates based on information obtained from located LTFU patients. The observed mortality estimate one year after enrollment was 1.7% (95% CI 1.3%–2.0%), revised to 2.8% (2.3%–3.1%) when deaths discovered through outreach were added, and adjusted to 9.2% (7.8%–10.6%) and 9.9% (8.4%–11.5%) through statistical modeling, depending on the method used. The corresponding estimates 12 months after ART initiation were 1.7% (1.3%–2.2%), 3.4% (2.9%–4.0%), 10.5% (8.7%–12.3%) and 10.7% (8.9%–12.6%).
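The core of the sampling-based adjustment is weighting outcomes ascertained in the traced sample up to the full dropout population; a simplified sketch (the death count among traced dropouts is hypothetical, and the study's actual estimator also adjusts for covariates):

```python
# Cohort figures reported above.
n_dropouts = 3624        # patients lost to follow-up
n_ascertained = 621      # dropouts whose vital status was ascertained

# Hypothetical: deaths discovered among the ascertained dropouts.
deaths_in_sample = 50

# Each ascertained dropout stands in for n_dropouts / n_ascertained patients.
weight = n_dropouts / n_ascertained
est_deaths_among_dropouts = deaths_in_sample * weight
print(round(est_deaths_among_dropouts))  # 292
```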

Conclusions/Significance

Assessment of the impact of LTFU is critical in program M&E, as mortality estimated from passive monitoring may underestimate true mortality by up to 80%. This bias can be ameliorated by tracing a sample of dropouts and statistically adjusting the mortality estimates to properly evaluate and guide large HIV care and treatment programs.

7.

Background

The objective of this study was to assess the benefit of temporary combination antiretroviral therapy (cART) during primary HIV infection (PHI).

Methods and Findings

Adult patients with laboratory evidence of PHI were recruited in 13 HIV treatment centers in the Netherlands and randomly assigned to receive no treatment or 24 or 60 wk of cART (allocation in a 1∶1∶1 ratio); if therapy was clinically indicated, participants were randomized over the two treatment arms (allocation in a 1∶1 ratio). Primary end points were (1) viral set point, defined as the plasma viral load 36 wk after randomization in the no treatment arm and 36 wk after treatment interruption in the treatment arms, and (2) the total time that patients were off therapy, defined as the time between randomization and start of cART in the no treatment arm, and the time between treatment interruption and restart of cART in the treatment arms. cART was (re)started in case of confirmed CD4 cell count <350 cells/mm3 or symptomatic HIV disease. In total, 173 participants were randomized. The modified intention-to-treat analysis comprised 168 patients: 115 were randomized over the three study arms, and 53 randomized over the two treatment arms. Of the 115 patients randomized over the three study arms, mean viral set point was 4.8 (standard deviation 0.6) log10 copies/ml in the no treatment arm, and 4.0 (1.0) and 4.3 (0.9) log10 copies/ml in the 24- and 60-wk treatment arms (between groups: p<0.001). The median total time off therapy in the no treatment arm was 0.7 (95% CI 0.0–1.8) y compared to 3.0 (1.9–4.2) and 1.8 (0.5–3.0) y in the 24- and 60-wk treatment arms (log rank test, p<0.001). In the adjusted Cox analysis, both 24 wk (hazard ratio 0.42 [95% CI 0.25–0.73]) and 60 wk of early treatment (hazard ratio 0.55 [0.32–0.95]) were associated with time to (re)start of cART.

Conclusions

In this trial, temporary cART during PHI was found to transiently lower the viral set point and defer the restart of cART during chronic HIV infection.

Trial registration

Current Controlled Trials ISRCTN59497461

8.

Background

In sub-Saharan Africa, men living with HIV often start ART at more advanced stages of disease and have higher early mortality than women. We investigated gender difference in long-term immune reconstitution.

Methods/Principal Findings

Antiretroviral-naïve adults who received ART for at least 9 months in four HIV programs in sub-Saharan Africa were included. Multivariate mixed linear models were used to examine gender differences in immune reconstitution on first-line ART. A total of 21,708 patients (68% women) contributed 61,912 person-years of follow-up. Median CD4 count at ART start was 149 cells/µL [IQR 85–206] for women and 125 cells/µL [IQR 63–187] for men. After the first year on ART, immune recovery was higher in women than in men, and gender-based differences increased by 20 CD4 cells/µL per year on average (95% CI 16–23; P<0.001). Up to 6 years after ART start, patients with low initial CD4 levels experienced similar gains to patients with high initial levels, including those with CD4 >250 cells/µL (difference between patients with <50 cells/µL and those with >250 cells/µL was 284 cells/µL; 95% CI 272–296; LR test for interaction with time p = 0.63). Among patients with an initial CD4 count of 150–200 cells/µL, women reached 500 CD4 cells/µL after 2.4 years on ART (95% CI 2.4–2.5) and men after 4.5 years (95% CI 4.1–4.8) of ART use.

Conclusion

Women achieved a better long-term immune response to ART, reaching CD4 levels associated with lower risks of AIDS-related morbidity and mortality more quickly than men.

9.

Background

Factors associated with serologic hepatitis B virus (HBV) outcomes in HIV-infected individuals remain incompletely understood, yet such knowledge may lead to improvements in the prevention and treatment of chronic HBV infection.

Methods and Findings

HBV-HIV co-infected cohort participants were retrospectively analyzed. HBV serologic outcomes were classified as chronic, resolved, and isolated-HBcAb. Chronic HBV (CHBV) was defined as the presence of HBsAg on two or more occasions at least six months apart. Risk factors for HBV serologic outcome were assessed using logistic regression. Of 2037 participants with HBV infection, 281 (14%) had CHBV. Overall, the proportions of HBV infections classified as CHBV were 11%, 16%, and 19% for CD4 cell count strata of ≥500, 200–499, and <200 cells/µL, respectively (p<0.0001). Risk of CHBV was increased for those with HBV infection occurring after HIV diagnosis (OR 2.62; 95% CI 1.78–3.85). This included the subset with CD4 count ≥500 cells/µL, where 21% of those with HBV after HIV diagnosis had CHBV compared with 9% for all other cases of HBV infection in this stratum (p = 0.0004). Prior receipt of HAART was associated with improved HBV serologic outcome overall (p = 0.012), and specifically among those with HBV after HIV (p = 0.002). In those with HBV after HIV, HAART was associated with a reduced risk of CHBV overall (OR 0.18; 95% CI 0.04–0.79), including reduced risk in the subsets with CD4 ≥350 cells/µL (p<0.001) and CD4 ≥500 cells/µL (p = 0.01), where no cases of CHBV were seen in those with a recent history of HAART use.

Conclusions

Clinical indicators of immunologic status in HIV-infected individuals, such as CD4 cell count, are associated with HBV serologic outcome. These data suggest that immunologic preservation through the increased use of HAART to improve functional anti-HBV immunity, whether by improved access to care or earlier initiation of therapy, would likely improve HBV infection outcomes in HIV-infected individuals.

10.

Background

Women may have persistent risk of HIV acquisition during pregnancy and postpartum. Estimating risk of HIV during these periods is important to inform optimal prevention approaches. We performed a systematic review and meta-analysis to estimate maternal HIV incidence during pregnancy/postpartum and to compare mother-to-child HIV transmission (MTCT) risk among women with incident versus chronic infection.

Methods and Findings

We searched PubMed, Embase, and AIDS-related conference abstracts between January 1, 1980, and October 31, 2013, for articles and abstracts describing HIV acquisition during pregnancy/postpartum. The inclusion criterion was studies with data on incident HIV infection during pregnancy/postpartum. Random effects models were constructed to pool HIV incidence rates, cumulative HIV incidence, hazard ratios (HRs), or odds ratios (ORs) summarizing the association between pregnancy/postpartum status and HIV incidence, and MTCT risk and rates. Overall, 1,176 studies met the search criteria, of which 78 met the inclusion criterion, and 47 contributed data. Using data from 19 cohorts representing 22,803 total person-years, the pooled HIV incidence rate during pregnancy/postpartum was 3.8/100 person-years (95% CI 3.0–4.6): 4.7/100 person-years during pregnancy and 2.9/100 person-years postpartum (p = 0.18). Pooled cumulative HIV incidence was significantly higher in African than non-African countries (3.6% versus 0.3%, respectively; p<0.001). Risk of HIV was not significantly higher among pregnant (HR 1.3, 95% CI 0.5–2.1) or postpartum women (HR 1.1, 95% CI 0.6–1.6) than among non-pregnant/non-postpartum women in five studies with available data. In African cohorts, MTCT risk was significantly higher among women with incident versus chronic HIV infection in the postpartum period (OR 2.9, 95% CI 2.2–3.9) or in pregnancy/postpartum periods combined (OR 2.3, 95% CI 1.2–4.4). However, the small number of studies limited power to detect associations and sources of heterogeneity.

Conclusions

Pregnancy and the postpartum period are times of persistent HIV risk, at rates similar to “high risk” cohorts. MTCT risk was elevated among women with incident infections. Detection and prevention of incident HIV in pregnancy/postpartum should be prioritized, and is critical to decrease MTCT.

11.

Background

Artemisinin combination treatments (ACTs) are recommended as first-line treatment for falciparum malaria throughout the malaria-affected world. We reviewed the efficacy of a 3-day regimen of mefloquine and artesunate (MAS3) over a 13-year period of continuous deployment as first-line treatment in camps for displaced persons and in clinics for migrant populations along the Thai-Myanmar border.

Methods and Findings

3,264 patients were enrolled in prospective treatment trials between 1995 and 2007 and treated with MAS3. The proportion of patients with parasitaemia persisting on day 2 increased significantly from 4.5% before 2001 to 21.9% since 2002 (p<0.001). Delayed parasite clearance was associated with an increased risk of developing gametocytaemia (AOR = 2.29; 95% CI 2.00–2.69, p = 0.002). Gametocytaemia on admission and gametocyte carriage also increased over the years (p = 0.001, test for trend, for both). MAS3 efficacy declined slightly but significantly (hazard ratio 1.13; 95% CI 1.07–1.19, p<0.001), although efficacy in 2007 remained well within acceptable limits: 96.5% (95% CI 91.0–98.7). The in vitro susceptibility of P. falciparum to artesunate increased significantly until 2002, but thereafter declined to levels close to those of 13 years earlier (geometric mean in 2007: 4.2 nmol/l; 95% CI 3.2–5.5). The proportion of infections caused by parasites with increased pfmdr1 copy number rose from 30% (12/40) in 1996 to 53% (24/45) in 2006 (p = 0.012, test for trend).

Conclusion

Artesunate-mefloquine remains a highly efficacious antimalarial treatment in this area despite 13 years of widespread intense deployment, but there is evidence of a modest increase in resistance. Of particular concern is the slowing of parasitological response to artesunate and the associated increase in gametocyte carriage.

12.

Background

Attacks of hereditary angioedema (HAE) are unpredictable and, if affecting the upper airway, can be lethal. Icatibant is used for physician-administered or patient self-administered symptomatic treatment of HAE attacks in adults. Its mode of action includes disruption of the bradykinin pathway via blockade of the bradykinin B2 receptor. Early treatment is believed to shorten attack duration and prevent severe outcomes; however, evidence to support these benefits is lacking.

Objective

To examine the impact of timing of icatibant administration on the duration and resolution of HAE type I and II attacks.

Methods

The Icatibant Outcome Survey is an international, prospective, observational study of patients treated with icatibant. Data on the timing and outcomes of icatibant treatment for HAE attacks were collected between July 2009 and February 2012. A mixed model for repeated measures was fitted to 426 attacks in 136 HAE type I and II patients.

Results

Attack duration was significantly shorter in patients treated within 1 hour of attack onset than in those treated ≥1 hour after onset (6.1 hours versus 16.8 hours [p<0.001]). Similar significant effects were observed for <2 hours versus ≥2 hours (7.2 hours versus 20.2 hours [p<0.001]) and <5 hours versus ≥5 hours (8.0 hours versus 23.5 hours [p<0.001]). Treatment within 1 hour of attack onset also significantly reduced the time to attack resolution (5.8 hours versus 8.8 hours [p<0.05]). Self-administering patients were more likely to treat early and experienced shorter attacks than those treated by a healthcare professional.

Conclusion

Early blockade of the bradykinin B2 receptor with icatibant, particularly within the first hour of attack onset, significantly reduced attack duration and time to attack resolution.

13.

Context

As life expectancy improves among human immunodeficiency virus (HIV) patients, renal and cardiovascular diseases are increasingly prevalent in this population. Renal and cardiovascular diseases are mutual risk factors and are both characterized by albuminuria. Understanding the interactions between HIV, cardiovascular risk factors and renal disease is the first step in tackling this new therapeutic frontier in HIV.

Methods

In a rural primary health care centre, 903 HIV-infected adult patients were randomly selected, and data on HIV infection and cardiovascular risk factors were collected. The glomerular filtration rate (eGFR) was estimated, and albuminuria was defined as an albumin-creatinine ratio above 30 mg/g. Multivariate logistic regression was used to analyse associations between albuminuria and demographic, clinical and HIV-associated variables.
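The two case definitions above reduce to simple threshold checks; a minimal sketch (function name and example values are illustrative):

```python
def renal_flags(acr_mg_g, egfr):
    """Apply the study's cutoffs: albuminuria if ACR > 30 mg/g,
    reduced kidney function if eGFR < 60 mL/min/1.73 m^2."""
    return {
        "albuminuria": acr_mg_g > 30,
        "reduced_egfr": egfr < 60,
    }

flags = renal_flags(acr_mg_g=45, egfr=88)
print(flags)  # {'albuminuria': True, 'reduced_egfr': False}
```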

Results

The study population consisted of 903 HIV-infected patients, with a median age of 40 years (Inter-Quartile Range (IQR) 34–48 years), and included 625 (69%) women. The median duration since HIV diagnosis was 26 months (IQR 12–58 months) and 787 (87%) received antiretroviral therapy. Thirty-six (4%) of the subjects were shown to have diabetes and 205 (23%) hypertension. In the cohort, 21% had albuminuria and 2% an eGFR <60 mL/min/1.73m2. Albuminuria was associated with hypertension (adjusted odds ratio (aOR) 1.59; 95% confidence interval (CI) 1.05–2.41; p<0.05), total cholesterol (aOR 1.31; 95% CI 1.11–1.54; p<0.05), eGFR (aOR 0.98; 95% CI 0.97–0.99; p<0.001) and detectable viral load (aOR 2.74; 95% CI 1.56–4.79; p<0.001). Hypertension was undertreated: 78% were not receiving treatment, while another 11% were inadequately treated. No patients were receiving lipid-lowering medication.

Conclusion

Glomerular filtration rate was well conserved, while albuminuria was common amongst HIV-infected patients in rural South Africa. Both cardiovascular and HIV-specific variables were associated with albuminuria. Improved cardiovascular risk prevention as well as adequate virus suppression might be the key to escape the vicious circle of renal failure and cardiovascular disease and improve the long-term prognosis of HIV-infected patients.

14.
15.

Background

We performed a systematic review to assess the effect of integrated perinatal prevention of mother-to-child transmission (PMTCT) of HIV interventions, compared with non-integrated or partially integrated services, on service uptake in low- and middle-income countries.

Methods

We searched for experimental, quasi-experimental and controlled observational studies in any language from 21 databases and grey literature sources.

Results

Out of 28,654 citations retrieved, five studies met our inclusion criteria. A cluster randomized controlled trial reported a higher probability of nevirapine uptake at labor wards implementing HIV testing and structured nevirapine adherence assessment (RRR 1.37, bootstrapped 95% CI 1.04–1.77). A stepped-wedge design study showed marked improvement in antiretroviral therapy (ART) enrolment (44.4% versus 25.3%, p<0.001) and initiation (32.9% versus 14.4%, p<0.001) under integrated care, but the median gestational age at ART initiation (27.1 versus 27.7 weeks, p = 0.4), ART duration (10.8 versus 10.0 weeks, p = 0.3) and 90-day ART retention (87.8% versus 91.3%, p = 0.3) did not differ significantly. A cohort study reported no significant difference in either ART coverage (55% versus 48% versus 47%, p = 0.29) or eight weeks of ART duration before delivery (50% versus 42% versus 52%; p = 0.96) between integrated, proximal and distal partially integrated care. Two before-and-after studies assessed the impact of integration on HIV testing uptake in antenatal care. The first reported that significantly more women received information on PMTCT (92% versus 77%, p<0.001), were tested (76% versus 62%, p<0.001) and learned their HIV status (66% versus 55%, p<0.001) after integration. The second also reported a significant increase in HIV testing uptake after integration (98.8% versus 52.6%, p<0.001).

Conclusion

Limited, non-generalizable evidence supports the effectiveness of integrated PMTCT programs. More research measuring coverage and other relevant outcomes is urgently needed to inform the design of services delivering PMTCT programs.

16.

Aim

To analyze alcohol use, clinical data and laboratory parameters that may affect FIB-4, an index for measuring liver fibrosis, in HCV-monoinfected and HCV/HIV-coinfected drug users.

Patients and Methods

Patients admitted for substance abuse treatment between 1994 and 2006 were studied. Socio-demographic data, alcohol and drug use characteristics and clinical variables were obtained through hospital records. Blood samples for biochemistry, liver function tests, CD4 cell count, and serology of HIV and HCV infection were collected at admission. Multivariate linear regression was used to analyze the predictors of FIB-4 increase.

Results

A total of 472 patients (83% male, 17% female) were eligible. The median age at admission was 31 years (interquartile range (IQR) 27–35 years), and the median duration of drug use was 10 years (IQR 5.5–15 years). Unhealthy drinking (>50 g/day) was reported by 32% of the patients. FIB-4 scores were significantly greater in the HCV/HIV-coinfected patients (1.14, IQR 0.76–1.87) than in the HCV-monoinfected patients (0.75, IQR 0.56–1.11) (p<0.001). In the multivariate analysis, unhealthy drinking (p = 0.034), lower total cholesterol (p = 0.042), lower serum albumin (p<0.001), higher GGT (p<0.001) and a longer duration of addiction (p = 0.005) were independently associated with higher FIB-4 scores in the HCV-monoinfected drug users. The effect of unhealthy drinking on FIB-4 scores disappeared in the HCV/HIV-coinfected patients, whereas lower serum albumin (p<0.001), a lower CD4 cell count (p = 0.006), higher total bilirubin (p<0.001) and a longer drug addiction duration (p<0.001) were significantly associated with higher FIB-4 values.
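The FIB-4 index compared above is computed from routine laboratory values using the standard Sterling formula; a minimal sketch with a hypothetical patient:

```python
import math

def fib4(age_years, ast, alt, platelets):
    """FIB-4 = (age [y] * AST [U/L]) / (platelets [10^9/L] * sqrt(ALT [U/L]))."""
    return (age_years * ast) / (platelets * math.sqrt(alt))

# Hypothetical values for illustration.
score = fib4(age_years=31, ast=45, alt=40, platelets=230)
print(round(score, 2))  # 0.96
```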

Conclusions

Unhealthy alcohol use in HCV-monoinfected patients and HIV-related immunodeficiency in HCV/HIV-coinfected patients are important risk factors for liver fibrosis in the respective populations.

17.

Background

In 2006, Brazil began routine immunization of infants <15 wk of age with a single-strain rotavirus vaccine. We evaluated whether the rotavirus vaccination program was associated with declines in childhood diarrhea deaths and hospital admissions by monitoring disease trends before and after vaccine introduction in all five regions of Brazil with varying disease burden and distinct socioeconomic and health indicators.

Methods and Findings

National data were analyzed with an interrupted time-series analysis that used diarrhea-related mortality or hospitalization rates as the main outcomes. Monthly mortality and admission rates estimated for the years after rotavirus vaccination (2007–2009) were compared with expected rates calculated from pre-vaccine years (2002–2005), adjusting for secular and seasonal trends. During the three years following rotavirus vaccination in Brazil, rates for diarrhea-related mortality and admissions among children <5 y of age were 22% (95% confidence interval 6%–44%) and 17% (95% confidence interval 5%–27%) lower than expected, respectively. A cumulative total of ∼1,500 fewer diarrhea deaths and 130,000 fewer admissions were observed among children <5 y during the three years after rotavirus vaccination. The largest reductions in deaths (22%–28%) and admissions (21%–25%) were among children younger than 2 y, who had the highest rates of vaccination. In contrast, lower reductions in deaths (4%) and admissions (7%) were noted among children two years of age and older, who were not age-eligible for vaccination during the study period.
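The percentage declines above are computed against the counterfactual rates projected from pre-vaccine trends; the underlying arithmetic, with hypothetical illustrative rates:

```python
def percent_reduction(observed, expected):
    """Reduction of the observed rate relative to the expected
    (counterfactual) rate projected from the pre-vaccine period."""
    return 100 * (expected - observed) / expected

# Hypothetical monthly diarrhea-death rates per 100,000 children <5 y.
print(round(percent_reduction(observed=7.8, expected=10.0)))  # 22
```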

Conclusions

After the introduction of rotavirus vaccination for infants, significant declines for three full years were observed in under-5-y diarrhea-related mortality and hospital admissions for diarrhea in Brazil. The largest reductions in diarrhea-related mortality and hospital admissions for diarrhea were among children younger than 2 y, who were eligible for vaccination as infants, which suggests that the reduced diarrhea burden in this age group was associated with introduction of the rotavirus vaccine. These real-world data are consistent with evidence obtained from clinical trials and strengthen the evidence base for the introduction of rotavirus vaccination as an effective measure for controlling severe and fatal childhood diarrhea.

18.

Background

Antiretroviral Treatment (ART) significantly reduces HIV transmission. We conducted a cost-effectiveness analysis of the impact of expanded ART in South Africa.

Methods

We modeled a best-case scenario of 90% annual HIV testing coverage in adults 15–49 years old and four ART eligibility scenarios: CD4 count <200 cells/mm3 (current practice), CD4 count <350, CD4 count <500, and all CD4 levels. Outcomes for 2011–2050 include deaths, disability-adjusted life years (DALYs), HIV infections, cost, and cost per DALY averted. Service and ART costs reflect South African data and international generic prices. ART is assumed to reduce transmission by 92%. We conducted sensitivity analyses.

Results

Expanding ART to CD4 count <350 cells/mm3 prevents an estimated 265,000 (17%) and 1.3 million (15%) new HIV infections over 5 and 40 years, respectively. Cumulative deaths decline 15%, from 12.5 to 10.6 million, and DALYs by 14%, from 109 to 93 million, over 40 years. Costs drop $504 million over 5 years and $3.9 billion over 40 years, with breakeven by 2013. Compared with the current scenario, expanding to <500 prevents an additional 585,000 and 3 million new HIV infections over 5 and 40 years, respectively. Expanding to all CD4 levels decreases HIV infections by 3.3 million (45%) and costs by $10 billion over 40 years, with breakeven by 2023. By 2050, using higher ART and monitoring costs, the all-CD4-levels scenario saves $0.6 billion versus current practice; the other ART scenarios cost $9–194 per DALY averted. If ART reduces transmission by 99%, savings from all CD4 levels reach $17.5 billion. Sensitivity analyses suggest that poor retention and predominant acute-phase transmission reduce DALYs averted by 26% and savings by 7%.
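The "$9–194 per DALY averted" figures are incremental cost-effectiveness ratios: extra cost divided by DALYs averted relative to the comparator. A minimal sketch of that arithmetic; the 109 → 93 million DALY totals are taken from the abstract, but the $1.5 billion incremental cost is an invented illustration, not a reported figure:

```python
def cost_per_daly_averted(cost_new, cost_old, dalys_new, dalys_old):
    """Incremental cost-effectiveness ratio (ICER): extra cost per DALY
    averted versus the comparator. A negative value means the expanded
    strategy is cost-saving."""
    dalys_averted = dalys_old - dalys_new
    if dalys_averted <= 0:
        raise ValueError("new strategy must avert DALYs vs. comparator")
    return (cost_new - cost_old) / dalys_averted

# Illustrative 40-year figures: 16 million DALYs averted (109 -> 93
# million, as in the abstract) at a hypothetical extra cost of $1.5 billion.
icer = cost_per_daly_averted(cost_new=101.5e9, cost_old=100e9,
                             dalys_new=93e6, dalys_old=109e6)
print(f"${icer:.2f} per DALY averted")  # -> $93.75 per DALY averted
```

Scenarios that both save money and avert DALYs (a negative ICER), like the all-CD4-levels scenario above, are said to "dominate" the comparator.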

Conclusion

Expanding ART eligibility to CD4 counts <350 cells/mm3 may significantly reduce costs while reducing the HIV burden. Feasibility, including HIV testing coverage and ART uptake, retention, and adherence, should be evaluated.

19.

Background

Severe hypoglycemia is a major complication of insulin treatment in patients with type 1 diabetes, limiting full realization of glycemic control. Earlier studies showed that low levels of hemoglobin A1c (HbA1c), a marker of average plasma glucose, predict a high risk of severe hypoglycemia, but it is uncertain whether this association still exists. Given advances in diabetes technology and pharmacotherapy, we hypothesized that the inverse association between severe hypoglycemia and HbA1c has weakened in recent years.

Methods and Findings

We analyzed data of 37,539 patients with type 1 diabetes (mean age ± standard deviation 14.4±3.8 y, range 1–20 y) from the DPV (Diabetes Patienten Verlaufsdokumentation) Initiative diabetes cohort prospectively documented between January 1, 1995, and December 31, 2012. The DPV cohort covers an estimated proportion of >80% of all pediatric diabetes patients in Germany and Austria. Associations of severe hypoglycemia, hypoglycemic coma, and HbA1c levels were assessed by multivariable regression analysis. From 1995 to 2012, the relative risk (RR) for severe hypoglycemia and coma per 1% HbA1c decrease declined from 1.28 (95% CI 1.19–1.37) to 1.05 (1.00–1.09) and from 1.39 (1.23–1.56) to 1.01 (0.93–1.10), respectively, corresponding to annual risk reductions of 1.2% (95% CI 0.6–1.7, p<0.001) and 1.9% (0.8–2.9, p<0.001), respectively. Risk reduction of severe hypoglycemia and coma was strongest in patients with HbA1c levels of 6.0%–6.9% (RR 0.96 and 0.90 each year) and 7.0%–7.9% (RR 0.96 and 0.89 each year). From 1995 to 2012, glucose monitoring frequency and the use of insulin analogs and insulin pumps increased (p<0.001). Our study was not designed to investigate the effects of different treatment modalities on hypoglycemia risk. A further limitation is that associations of diabetes education and physical activity with severe hypoglycemia were not addressed in this study.
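A relative risk "per 1% HbA1c decrease" comes from a log-linear model, so it compounds multiplicatively over larger decreases. A small sketch using the abstract's 1995 and 2012 point estimates; the 2-percentage-point decrease is an illustrative scenario, not a comparison made in the study:

```python
def rr_for_hba1c_drop(rr_per_unit, drop):
    """Relative risk implied by a log-linear model when HbA1c falls by
    `drop` percentage points: RR compounds multiplicatively, so
    RR_total = rr_per_unit ** drop (equivalently exp(beta * drop),
    where beta = log(rr_per_unit) is the regression coefficient)."""
    return rr_per_unit ** drop

# 1995 estimate from the abstract: RR 1.28 per 1% HbA1c decrease for
# severe hypoglycemia. A 2-point decrease (e.g. 9% -> 7%) then implies:
rr_1995 = rr_for_hba1c_drop(1.28, 2)
# By 2012 the per-1% estimate had fallen to 1.05:
rr_2012 = rr_for_hba1c_drop(1.05, 2)
print(f"1995: RR {rr_1995:.2f}; 2012: RR {rr_2012:.2f}")
# -> 1995: RR 1.64; 2012: RR 1.10
```

This is why the shrinking per-unit RR matters clinically: in 1995 lowering HbA1c by two points carried a ~64% higher hypoglycemia risk, whereas by 2012 the same improvement in glycemic control carried only a ~10% excess risk.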

Conclusions

The previously strong association of low HbA1c with severe hypoglycemia and coma in young individuals with type 1 diabetes has substantially decreased in the last decade, allowing achievement of near-normal glycemic control in these patients. Please see later in the article for the Editors' Summary.

20.

Background

Self-reported data are often used for estimates on healthcare utilization in cost-effectiveness studies.

Objective

To analyze older adults’ self-report of healthcare utilization compared to data obtained from the general practitioners’ (GP) electronic medical record (EMR) and to study the differences in healthcare utilization between those who completed the study, those who did not respond, and those lost to follow-up.

Methods

A prospective cohort study was conducted among community-dwelling persons aged 70 years and above, without dementia and not living in a nursing home. Self-reporting questionnaires were compared to healthcare utilization data extracted from the EMR at the GP-office.

Results

Overall, 790 persons completed questionnaires at baseline; median age was 75 years (IQR 72–80), and 55.8% had no disabilities in (instrumental) activities of daily living. Agreement between self-report and EMR data on healthcare utilization was substantial for ‘hospitalizations’ and ‘GP home visits’ at 12 months (intraclass correlation coefficient 0.63, 95% CI 0.58–0.68). Compared to the EMR, self-reported healthcare utilization was generally slightly over-reported. Non-respondents received more GP home visits (p<0.05). Of the participants who died or were institutionalized, 62.2% received 2 or more home visits (p<0.001) and 18.9% had 2 or more hospital admissions (p<0.001), versus 18.6% and 3.9%, respectively, of the participants who completed the study. Of the participants lost to follow-up for other reasons, 33.0% received 2 or more home visits (p<0.01) versus 18.6% of the participants who completed the study.
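The agreement statistic quoted above, the intraclass correlation coefficient, can be computed directly from paired counts. A sketch of the two-way random-effects, absolute-agreement, single-measure form ICC(2,1), with hypothetical self-report/EMR pairs; the abstract does not specify which ICC variant the study used:

```python
import numpy as np

def icc2_1(ratings):
    """ICC(2,1): two-way random-effects, absolute-agreement,
    single-measure intraclass correlation. `ratings` is an
    (n_subjects, n_raters) array; here n_raters = 2
    (self-report vs. EMR)."""
    Y = np.asarray(ratings, float)
    n, k = Y.shape
    grand = Y.mean()
    # Sums of squares for subjects (rows), raters (columns), and error.
    ss_rows = k * ((Y.mean(axis=1) - grand) ** 2).sum()
    ss_cols = n * ((Y.mean(axis=0) - grand) ** 2).sum()
    ss_err = ((Y - grand) ** 2).sum() - ss_rows - ss_cols
    msr = ss_rows / (n - 1)                 # between-subjects mean square
    msc = ss_cols / (k - 1)                 # between-raters mean square
    mse = ss_err / ((n - 1) * (k - 1))      # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical paired counts of GP home visits (self-reported, EMR),
# with slight over-reporting as described in the abstract:
pairs = np.array([[0, 0], [2, 1], [1, 1], [4, 3],
                  [0, 1], [3, 3], [2, 2], [5, 4]])
print(f"ICC(2,1) = {icc2_1(pairs):.2f}")  # -> ICC(2,1) = 0.90
```

Unlike a Pearson correlation, this absolute-agreement ICC is penalized by systematic over- or under-reporting, which makes it the more informative choice when one source is suspected of bias, as with self-report here.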

Conclusions

Self-report of hospitalizations and GP home visits in a broadly ‘healthy’ community-dwelling older population seems adequate and efficient. However, as people become older and more functionally impaired, collecting healthcare utilization data from the EMR should be considered to avoid measurement bias, particularly if the data will be used to support economic evaluation.
