Similar Articles
20 similar articles retrieved
1.

Purpose

To study the relationship between outdoor activity and the risk of age-related cataract (ARC) in a rural population of the Taizhou Eye Study (phase 1 report).

Method

A population-based, cross-sectional study of 2,006 eligible rural adults (≥45 years old) from the Taizhou Eye Study was conducted from July to September 2012. Participants underwent detailed ophthalmologic examinations, including uncorrected visual acuity (UCVA), best corrected visual acuity (BCVA), intraocular pressure (IOP), slit-lamp and fundus examinations, as well as questionnaires on previous outdoor activity and sunlight-protection methods. ARC was graded with the LOCS III classification system. The prevalence of cortical, nuclear, and posterior subcapsular cataract was assessed separately, together with risk factors and their association with outdoor activity.

Results

Of the 2,006 eligible participants, 883 (44.0%) were diagnosed with ARC. The per-person prevalence rates of cortical, nuclear, and posterior subcapsular cataract were 41.4%, 30.4%, and 1.5%, respectively. Women were more likely than men to have nuclear and cortical cataract (OR = 1.559, 95% CI 1.204–2.019 and OR = 1.862, 95% CI 1.456–2.380, respectively). Adults with high myopia had a higher prevalence of nuclear cataract than those without (OR = 2.528, 95% CI 1.055–6.062). Multivariable logistic regression showed that age was a risk factor for nuclear (OR = 1.190, 95% CI 1.167–1.213) and cortical (OR = 1.203, 95% CI 1.181–1.226) cataract, and that coexisting fundus disease was a risk factor for posterior subcapsular cataract (OR = 6.529, 95% CI 2.512–16.970). Outdoor activity was an independent risk factor for cortical cataract (OR = 1.043, 95% CI 1.004–1.083): the risk of cortical cataract increased by 4.3% (95% CI 0.4%–8.3%) for each additional hour of outdoor activity, and by 1.1% (95% CI 0.1%–2.0%) for each additional year of cumulative UV-B exposure.
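These per-hour and per-year percentages follow directly from the reported odds ratios (percent increase in odds ≈ OR − 1). A minimal sketch is shown below; the helper function is hypothetical, and the per-year UV-B odds ratio (≈1.011) is inferred from the quoted 1.1% increase rather than stated in the abstract.

```python
# Hedged sketch: converting a per-unit odds ratio and its 95% CI into the
# percent increase quoted in the abstract. pct_increase is an illustrative
# helper, not part of the study's analysis code.
def pct_increase(or_point, or_low, or_high):
    """Map an odds ratio (with 95% CI) to a percent increase in odds."""
    return tuple(round((x - 1.0) * 100.0, 1) for x in (or_point, or_low, or_high))

print(pct_increase(1.043, 1.004, 1.083))  # (4.3, 0.4, 8.3) -> per extra hour of outdoor activity
print(pct_increase(1.011, 1.001, 1.020))  # (1.1, 0.1, 2.0) -> per extra year of UV-B exposure (inferred OR)
```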

Conclusion

Outdoor activity was an independent risk factor for cortical cataract but not for nuclear or posterior subcapsular cataract. The risk of cortical cataract increased by 4.3% for each additional hour of outdoor activity and by 1.1% (95% CI 0.1%–2.0%) for each additional year of cumulative UV-B exposure.

2.

Objectives

To evaluate the feasibility and effectiveness of dried blood spots (DBS) use for viral load (VL) monitoring, describing patient outcomes and programmatic challenges that are relevant for DBS implementation in sub-Saharan Africa.

Methods

We recruited adult antiretroviral therapy (ART) patients from five district hospitals in Malawi. Eligibility reflected anticipated Ministry of Health VL monitoring criteria. Testing was conducted at a central laboratory. Virological failure was defined as >5000 copies/ml. Primary outcomes were program feasibility (timely result availability and patient receipt) and effectiveness (second-line therapy initiation).

Results

We enrolled 1,498 participants; 5.9% were failing at baseline. Median time from enrollment to receipt of results was 42 days, and 79.6% of participants received results within 3 months. Among participants with a confirmed elevated VL, 92.6% initiated second-line therapy, and 90.7% were switched within 365 days of VL testing. Nearly one-third (30.8%) of participants with an elevated baseline VL had a suppressed VL (<5,000 copies/ml) on confirmatory testing. The median period between enrollment and specimen testing was 23 days. Adjusting for relevant covariates, participants on ART >4 years were more likely to be failing than participants on therapy for 1–4 years (RR 1.7, 95% CI 1.0–2.8), and older participants were less likely to be failing (RR 0.95, 95% CI 0.92–0.98). There was no difference in the likelihood of failure based on clinical symptoms (RR 1.17, 95% CI 0.65–2.11).

Conclusions

DBS for VL monitoring is feasible and effective in real-world clinical settings. Centralized DBS testing may increase access to VL monitoring in remote settings. Programmatic outcomes are encouraging, especially the proportion of eligible participants switched to second-line therapy.

3.
4.
5.

Background

Killed oral cholera vaccines (OCVs) have been licensed for use in developing countries, but protection conferred by licensed OCVs beyond two years of follow-up has not been demonstrated in randomized, clinical trials.

Methods/Principal Findings

We conducted a cluster-randomized, placebo-controlled trial of a two-dose regimen of a low-cost killed whole-cell OCV in residents 1 year of age and older living in 3,933 clusters in Kolkata, India. The primary endpoint was culture-proven Vibrio cholerae O1 diarrhea episodes severe enough to require treatment in a health care facility. Of the 66,900 fully dosed individuals (31,932 vaccinees and 34,968 placebo recipients), 38 vaccinees and 128 placebo recipients developed cholera during three years of follow-up (protective efficacy 66%; one-sided 95% CI lower bound = 53%, p<0.001). Vaccine protection during the third year of follow-up was 65% (one-sided 95% CI lower bound = 44%, p<0.001). Significant protection was evident in the second year of follow-up in children vaccinated at ages 1–4 years and in the third year in older age groups.
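As a rough consistency check, the crude protective efficacy implied by the counts above is one minus the ratio of attack rates. The published 66% additionally accounts for the cluster-randomized design, so the naive person-level figure below is only an approximation.

```python
# Hedged sketch: crude protective efficacy from the counts reported in the abstract.
vaccinees, cases_vaccine = 31_932, 38
placebos, cases_placebo = 34_968, 128

attack_rate_vaccine = cases_vaccine / vaccinees
attack_rate_placebo = cases_placebo / placebos
efficacy = 1 - attack_rate_vaccine / attack_rate_placebo

print(f"crude protective efficacy ~ {efficacy:.0%}")  # ~67%, close to the reported 66%
```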

Conclusions/Significance

The killed whole-cell OCV conferred significant protection that was evident in the second year of follow-up in young children and was sustained for at least three years in older age groups. Continued follow-up will be important to establish the vaccine's duration of protection.

Trial Registration

ClinicalTrials.gov NCT00289224.

6.

Objective

To evaluate the five-year incidence of chronic kidney disease (CKD) stage 3–5 (persistently decreased kidney function below 60 mL/min per 1.73 m²) among patients with type 2 diabetes, to identify the risk factors associated with CKD, and to develop a risk table for five-year CKD stage 3–5 risk stratification in clinical use.

Design

The MADIABETES Study is a prospective cohort study of 3,443 outpatients with type 2 diabetes mellitus, sampled from 56 primary health care centers (131 general practitioners) in Madrid (Spain).

Results

The cumulative incidence of CKD stage 3–5 at five years was 10.23% (95% CI 9.12–11.44), and the incidence density was 2.07 (95% CI 1.83–2.33) cases per 1,000 patient-months, or 2.48 (95% CI 2.19–2.79) cases per 100 patient-years. The highest hazard ratio (HR) for developing CKD stage 3–5 was for albuminuria ≥300 mg/g (HR = 4.57; 95% CI 2.46–8.48). Other variables with a high HR were age over 74 years (HR = 3.20; 95% CI 2.13–4.81), a history of hypertension (HR = 2.02; 95% CI 1.42–2.89), myocardial infarction (HR = 1.72; 95% CI 1.25–2.37), dyslipidemia (HR = 1.68; 95% CI 1.30–2.17), duration of diabetes mellitus ≥10 years (HR = 1.46; 95% CI 1.14–1.88), and systolic blood pressure >149 mmHg (HR = 1.52; 95% CI 1.02–2.24).
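The two incidence-density figures are unit conversions of one another, as the quick check below illustrates.

```python
# Hedged sketch: cases per 1,000 patient-months converted to cases per
# 100 patient-years (12 months per year; denominator 1,000 -> 100).
per_1000_patient_months = 2.07
per_100_patient_years = per_1000_patient_months * 12 / 10

print(round(per_100_patient_years, 2))  # 2.48, matching the reported value
```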

Conclusions

After a five-year follow-up, the cumulative incidence of CKD is concordant with rates described in Spain and other countries. Albuminuria ≥300 mg/g and age over 74 years were the risk factors most strongly associated with developing CKD stage 3–5. Control of blood pressure, lipids, and albuminuria could reduce the incidence of CKD in patients with T2DM.

7.

Background

Insufficient population-based data on trends in stroke morbidity and mortality exist to determine the success of prevention strategies and improvements in health care delivery. The aim of this study was to determine trends in incidence and outcome (1-year mortality, 28-day case fatality) in relation to management and risk factors for stroke in the multi-ethnic population of Auckland, New Zealand (NZ) over 30 years.

Methods

Four population-based stroke incidence register studies were undertaken in adult residents (aged ≥15 years) of Auckland, NZ, in 1981–1982, 1991–1992, 2002–2003, and 2011–2012. All used standard World Health Organization (WHO) diagnostic criteria and multiple overlapping sources of case ascertainment for hospitalised and non-hospitalised, fatal and non-fatal, new stroke events. Ethnicity was consistently self-identified into four major groups. Crude and age-adjusted (WHO world population standard) annual incidence and mortality, with corresponding 95% confidence intervals (CI), were calculated per 100,000 people, assuming a Poisson distribution.
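The age adjustment described here is direct standardisation to the WHO world standard population, with Poisson-based confidence intervals. The sketch below is a generic illustration only: the age bands, case counts, person-years, and weights are hypothetical, not study data.

```python
# Hedged sketch of direct age standardisation with an approximate Poisson CI.
import math

# (cases, person-years) per age band -- hypothetical values for illustration
observed = {"15-44": (50, 400_000), "45-64": (300, 250_000), "65+": (900, 120_000)}
# Corresponding standard-population weights (must sum to 1) -- illustrative, not the WHO table
weights = {"15-44": 0.62, "45-64": 0.28, "65+": 0.10}

rate_std = sum(weights[a] * cases / py for a, (cases, py) in observed.items())
rate_per_100k = rate_std * 100_000

# Variance of the weighted sum, treating each age-specific count as Poisson
var = sum((weights[a] ** 2) * cases / (py ** 2) for a, (cases, py) in observed.items())
half_width = 1.96 * math.sqrt(var) * 100_000

print(f"{rate_per_100k:.0f} per 100,000 (95% CI {rate_per_100k - half_width:.0f}-{rate_per_100k + half_width:.0f})")
```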

Results

A total of 5,400 new stroke patients were registered across four 12-month recruitment phases over the 30-year study period; 79% were NZ/European, 6% Māori, 8% Pacific people, and 7% of Asian or other origin. Overall stroke incidence and 1-year mortality decreased by 23% (95% CI 5%–31%) and 62% (95% CI 36%–86%), respectively, from 1981 to 2012. Although stroke incidence and mortality declined across all groups in NZ from 1991, Māori and Pacific groups had the slowest rates of decline and continued to experience stroke at a significantly younger age (mean ages 60 and 62 years, respectively) compared with NZ/Europeans (mean age 75 years). There was also a decline in 28-day stroke case fatality (overall by 14%, 95% CI 11%–17%) across all ethnic groups from 1981 to 2012. However, there were significant increases in the frequencies of pre-morbid hypertension, myocardial infarction, and diabetes mellitus among stroke patients, but a reduction in the frequency of current smoking.

Conclusions

In this unique temporal series of studies spanning 30 years, stroke incidence, early case fatality, and 1-year mortality have declined, but ethnic disparities in stroke risk and outcome persisted, suggesting that primary stroke prevention remains crucial to reducing the burden of this disease.

8.
9.

Background

Decision-making capacity to provide informed consent for treatment is essential among cancer patients. The purpose of this study was to identify the frequency of decision-making incapacity among newly diagnosed older patients with hematological malignancy receiving first-line chemotherapy, to examine factors associated with incapacity, and to assess physicians' perceptions of patients' decision-making incapacity.

Methods

Consecutive patients aged 65 years or over with a primary diagnosis of malignant lymphoma or multiple myeloma were recruited. Decision-making capacity was assessed using the Structured Interview for Competency and Incompetency Assessment Testing and Ranking Inventory-Revised (SICIATRI-R). Cognitive impairment, depressive condition and other possible associated factors were also evaluated.

Results

Among 139 eligible patients registered for this study, 114 completed the survey. Of these, 28 (25%, 95% confidence interval [CI] 17%–32%) were judged to have some degree of decision-making incapacity according to the SICIATRI-R. Higher levels of cognitive impairment and increasing age were significantly associated with decision-making incapacity. Physicians had difficulty performing competency assessments (Cohen's kappa = -0.54).
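The quoted interval for 28/114 is consistent with a simple normal-approximation (Wald) confidence interval for a binomial proportion, as the sketch below shows; the original study may have used a different interval.

```python
# Hedged sketch: Wald 95% confidence interval for the proportion 28/114.
import math

x, n = 28, 114
p = x / n
half_width = 1.96 * math.sqrt(p * (1 - p) / n)
print(f"{p:.0%} (95% CI {p - half_width:.0%}-{p + half_width:.0%})")  # 25% (17%-32%)
```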

Conclusions

Decision-making incapacity was found to be a common and under-recognized problem in older patients with cancer. Assessing age and cognitive impairment may help identify patients at high risk of decision-making incapacity.

10.

Background

Following a negative test, the performance of fecal immunochemical testing in the subsequent screening round is rarely reported. It is crucial to allocate resources to participants who are more likely to test positive in subsequent rounds after an initial negative result.

Objective

To identify risk factors associated with a positive result in subsequent screening.

Methods

The dataset comprised consecutive participants who voluntarily underwent fecal tests and colonoscopy during routine medical examinations at the National Taiwan University Hospital between January 2007 and December 2011. Risk factors for a positive fecal test in subsequent screening rounds were assessed using Cox proportional hazards models.

Results

Our cohort consisted of 3,783 participants over a 5-year period. In three rounds of testing, 3,783, 1,537, and 624 participants underwent fecal tests, respectively; 5.7%, 5.1%, and 3.9% tested positive, and the positive predictive values were 40.2%, 20.3%, and 20.8%, respectively. Age ≥60 years (adjusted hazard ratio 1.53, 95% CI 1.21–1.93) and male gender (adjusted hazard ratio 1.32, 95% CI 1.02–1.69) were risk factors; however, an interaction between age and gender was noted. Men had a higher risk than women below 60 years of age (p = 0.002), while this difference was no longer observed at ≥60 years of age (p = 0.74). The optimal screening interval for participants with a negative baseline fecal test was 2 years.

Conclusions

Following a negative test, older age and male gender are risk factors for a positive result in subsequent rounds, while the gender difference diminishes with age. Biennial screening is sufficient following a negative fecal test.

11.

Objective

To assess the prognostic value of 12-month N-terminal pro-brain natriuretic peptide (NT-proBNP) levels for adverse cardiovascular events in patients with stable coronary heart disease.

Methods

NT-proBNP concentrations were measured at baseline and at 12-month follow-up in participants of cardiac rehabilitation (median follow-up 8.96 years). Cox proportional hazards models were used to evaluate the prognostic value of log-transformed NT-proBNP levels and of 12-month relative changes in NT-proBNP for adverse cardiovascular events, adjusting for established risk factors measured at baseline.

Results

Among 798 participants (84.7% men, mean age 59 years) there were 114 adverse cardiovascular events. Twelve-month NT-proBNP levels were higher than baseline levels in 60 patients (7.5%) and were numerically more strongly associated with the outcome in multivariable analysis (HR 1.65 [95% CI 1.33–2.05] vs. HR 1.41 [95% CI 1.12–1.78]), with a net reclassification improvement (NRI) of 0.098 [95% CI 0.002–0.194] compared with an NRI of 0.047 [95% CI −0.0004 to 0.133] for baseline NT-proBNP levels. A 10% 12-month increment in NT-proBNP was associated with an HR of 1.35 [95% CI 1.12–1.63] for the onset of an adverse cardiovascular event. Subjects with a 12-month increment in NT-proBNP had an HR of 2.56 [95% CI 1.10–5.95] compared with those with the largest 12-month reduction.

Conclusions

Twelve-month NT-proBNP levels after an acute cardiovascular event are strongly associated with a subsequent event and may provide numerically better reclassification of patients at risk for an adverse cardiovascular event than baseline NT-proBNP levels after adjustment for established risk factors.

12.

Background

We combined the outcomes of all randomised controlled trials to investigate the safety and efficacy of steroid avoidance or withdrawal (SAW) regimens compared with steroid-based (SB) regimens in paediatric kidney transplantation.

Methods

A systematic literature search of PubMed, Embase, the Cochrane Library, trial registries and BIOSIS Previews was performed. The change in height-standardised Z-score from baseline (ΔHSDS) and acute rejection were the primary endpoints.

Results

Eight reports from 5 randomised controlled trials were included, with a total of 528 patients. A significant increase in ΔHSDS was observed in the SAW group (mean difference (MD) = 0.38, 95% confidence interval (CI) 0.07–0.68, P = 0.01), particularly within the first year post-withdrawal (MD = 0.22, 95% CI 0.10–0.35, P = 0.0003) and in prepubertal recipients (MD = 0.60, 95% CI 0.21–0.98, P = 0.002). There was no significant difference in the risk of acute rejection between the groups (relative risk = 1.04, 95% CI 0.80–1.36, P = 0.77).
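Pooled mean differences of this kind come from meta-analytic weighting of study-level estimates. The abstract does not state whether a fixed- or random-effects model was used, so the sketch below shows the simpler inverse-variance fixed-effect case with hypothetical study-level inputs.

```python
# Hedged sketch: inverse-variance fixed-effect pooling of mean differences.
import math

studies = [  # (mean difference in ΔHSDS, standard error) -- illustrative values only
    (0.45, 0.20),
    (0.30, 0.25),
    (0.40, 0.30),
]

weights = [1 / se ** 2 for _, se in studies]
pooled_md = sum(w * md for (md, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))
print(f"MD = {pooled_md:.2f}, 95% CI {pooled_md - 1.96 * pooled_se:.2f} to {pooled_md + 1.96 * pooled_se:.2f}")
```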

Conclusions

The SAW regimen is justified in selected paediatric renal allograft recipients because it provides significant benefits in post-transplant growth within the first year post-withdrawal, with minimal effects on the risk of acute rejection, graft function, and graft and patient survival within 3 years post-withdrawal. These selected paediatric recipients should have the following characteristics: prepubertal; Caucasian; primary disease not related to immunological factors; de novo kidney transplant recipient; and low panel-reactive antibody levels.

13.

Background

Chinese patients achieve higher blood statin concentrations than Caucasians for a given dose. It remains unknown whether this translates into an increased risk of serious statin-associated adverse events among Chinese patients.

Methods

We conducted a population-based retrospective cohort study of older adults (mean age 74 years) newly prescribed a statin in Ontario, Canada between 2002 and 2013, in which 19,033 Chinese patients (identified through a validated surname algorithm) were matched 1:3 by propensity score to 57,099 non-Chinese patients. This study used linked healthcare databases.
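Propensity-score matching of this kind typically models the probability of belonging to the exposure group (here, Chinese ethnicity) from baseline covariates and then pairs each exposed subject with the nearest unexposed subjects. The sketch below is a generic illustration, not the study's actual algorithm: the covariates, the 0.2-SD caliper, and the greedy 1:3 nearest-neighbour strategy without replacement are all assumptions.

```python
# Hedged sketch of 1:3 propensity-score matching on the logit of the score.
import numpy as np
from sklearn.linear_model import LogisticRegression

def match_1_to_3(X, exposed, caliper=0.2):
    """X: (n, p) covariate array; exposed: 0/1 array. Returns {exposed index: [3 control indices]}."""
    ps = LogisticRegression(max_iter=1000).fit(X, exposed).predict_proba(X)[:, 1]
    logit = np.log(ps / (1 - ps))
    cal = caliper * logit.std()                  # common choice: caliper of 0.2 SD of the logit
    controls = list(np.flatnonzero(exposed == 0))
    matches = {}
    for t in np.flatnonzero(exposed == 1):
        nearest = sorted(controls, key=lambda c: abs(logit[c] - logit[t]))[:3]
        if len(nearest) == 3 and all(abs(logit[c] - logit[t]) <= cal for c in nearest):
            matches[int(t)] = [int(c) for c in nearest]
            for c in nearest:
                controls.remove(c)               # matching without replacement
    return matches
```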

Findings

The follow-up period (mean 1.1 years, maximum 10.8 years) was similar between groups, as were the reasons for censoring (end of follow-up, death, or statin discontinuation). Forty-seven percent of Chinese patients were initiated on a higher-than-recommended statin dose. Compared with non-Chinese patients, Chinese ethnicity was not associated with any of the four serious statin-associated adverse events assessed in this study [rhabdomyolysis: hazard ratio (HR) 0.61 (95% CI 0.28 to 1.34); incident diabetes: HR 1.02 (95% CI 0.80 to 1.30); acute kidney injury: HR 0.90 (95% CI 0.72 to 1.13); all-cause mortality: HR 0.88 (95% CI 0.74 to 1.05)]. Similar results were observed in subgroups defined by statin type and dose.

Conclusions

We observed no higher risk of serious statin toxicity in Chinese than in matched non-Chinese older adults with similar indicators of baseline health. Regulatory agencies should review the available data, including the findings from our study, to decide whether a change in their statin dosing recommendations for people of Chinese ethnicity is warranted.

14.

Background

The purpose of the study was to investigate the association between depression and/or depressive symptoms during pregnancy and the risk of an operative delivery or preeclampsia, and to quantify the strength of the association.

Methods

A search of the PubMed, SCI/SSCI, ProQuest PsycARTICLES and CINAHL databases was supplemented by manual searches of the bibliographies of key retrieved articles and review articles. We aimed to include case-control or cohort studies that reported data on antenatal depression and/or depressive symptoms and the risk of an operative delivery and/or preeclampsia.

Results

Twelve studies using self-reported screening instruments were eligible for inclusion, with a total of 8,400 participants. Seven articles with 4,421 participants reported the risk of an operative delivery, and five articles with 3,979 participants reported the risk of preeclampsia. The pooled analyses showed that both operative delivery and preeclampsia had a statistically significant association with antenatal depressive symptoms (RR = 1.24, 95% CI 1.14 to 1.35, and OR = 1.63, 95% CI 1.32 to 2.02, respectively). When pre-pregnancy body mass index was controlled for in the original study design, the risk for preeclampsia persisted (OR = 1.48, 95% CI 1.04 to 2.01), while the risk for an operative delivery became uncertain (RR = 1.01, 95% CI 0.85 to 1.22).

Conclusions

Antenatal depressive symptoms are associated with a moderately increased risk of operative delivery and preeclampsia. An abnormal pre-pregnancy body mass index may modify this association.

15.

Background

In 2011, a new variant of influenza A(H3N2) emerged that carried genes from swine H3N2 viruses together with the matrix (M) gene of the influenza A(H1N1)pdm09 virus, acquired through reassortment. New combinations and variants of pre-existing influenza viruses are worrisome if immunity in a population is low or nonexistent, which increases the chance of an outbreak or pandemic.

Methods

Sera collected in 2011 were obtained from US Department of Defense service members in three age groups: 19–21 years, 32–33 years, and 47–48 years. Pre- and post-vaccination samples were available for the youngest age group, and post-vaccination samples for the two older groups. Specimens were tested using microneutralization assays for antibody titers against H3N2v (A/Indiana/10/2011) and seasonal H3N2 virus (A/Perth/16/2009).

Results

The youngest age group had a significantly (p<0.05) higher geometric mean titer against H3N2v, at 165 (95% confidence interval [CI] 105–225), than the two older groups, aged 32–33 and 47–48 years, whose geometric mean titers were 68 (95% CI 55–82) and 46 (95% CI 24–65), respectively. Similarly, the youngest age group also had the highest geometric mean titer against seasonal H3N2. In the youngest age group, the proportion of participants who seroconverted after vaccination was 12% for H3N2v and 27% for seasonal H3N2.
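Geometric mean titers are usually computed on the log scale and back-transformed, as in the minimal sketch below; the titers shown are hypothetical two-fold dilutions, not study data.

```python
# Hedged sketch: geometric mean titer (GMT) with a 95% CI on the log scale.
import math

titers = [40, 80, 160, 320, 640, 160, 80, 320]   # illustrative neutralization titers
logs = [math.log(t) for t in titers]
n = len(logs)
mean_log = sum(logs) / n
sd_log = math.sqrt(sum((x - mean_log) ** 2 for x in logs) / (n - 1))
se_log = sd_log / math.sqrt(n)

gmt = math.exp(mean_log)
low, high = math.exp(mean_log - 1.96 * se_log), math.exp(mean_log + 1.96 * se_log)
print(f"GMT {gmt:.0f} (95% CI {low:.0f}-{high:.0f})")
```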

Discussion

Our results were similar to previous studies that found the highest seroprotection among young adults and decreasing titers among older adults. The proportion of 19- to 21-year-olds who seroconverted after seasonal vaccination was low and similar to previous findings. Improving our understanding of H3N2v immunity among different age groups in the United States can help inform vaccination plans if H3N2v becomes more transmissible in the future.

16.

Purpose

The Bedside Index for Severity in Acute Pancreatitis (BISAP) score has been developed to identify patients at high risk for mortality or severe disease early during the course of acute pancreatitis. We aimed to undertake a meta-analysis to quantify the accuracy of the BISAP score for predicting mortality and severe acute pancreatitis (SAP).

Materials and Methods

We searched the PubMed, Embase, and Cochrane Library databases to identify studies using the BISAP score to predict mortality or SAP. The pooled sensitivity, specificity, likelihood ratios, and diagnostic odds ratio (DOR) were calculated and compared with those of traditional scoring systems.

Results

Twelve cohorts from 10 studies were included. The overall sensitivity of a BISAP score ≥3 for mortality was 56% (95% CI 53%–60%), with a specificity of 91% (95% CI 90%–91%). The positive and negative likelihood ratios were 5.65 (95% CI 4.23–7.55) and 0.48 (95% CI 0.41–0.56), respectively. For the outcome of SAP, the pooled sensitivity was 51% (43%–60%) and the specificity was 91% (89%–92%); the pooled positive and negative likelihood ratios were 7.23 (4.21–12.42) and 0.56 (0.44–0.71), respectively. Compared with the BISAP score, the Ranson criteria and APACHE II score showed higher sensitivity and lower specificity for both outcomes.
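Likelihood ratios and the diagnostic odds ratio are simple functions of sensitivity and specificity. Because the meta-analysis pools each index across studies separately, the published LR+ of 5.65 differs slightly from the value implied by the pooled sensitivity and specificity alone, as the sketch below illustrates.

```python
# Hedged sketch: likelihood ratios and DOR from the pooled sensitivity and
# specificity (mortality outcome, BISAP >= 3).
sens, spec = 0.56, 0.91

lr_pos = sens / (1 - spec)      # ~6.2 (published pooled LR+ = 5.65)
lr_neg = (1 - sens) / spec      # ~0.48 (matches the published LR-)
dor = lr_pos / lr_neg           # diagnostic odds ratio

print(round(lr_pos, 2), round(lr_neg, 2), round(dor, 1))
```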

Conclusions

The BISAP score is a reliable tool for identifying patients with acute pancreatitis at high risk of unfavorable outcomes. Compared with the Ranson criteria and the APACHE II score, the BISAP score showed superior specificity but suboptimal sensitivity for both mortality and SAP.

17.

Background

Haiti's cholera epidemic has been devastating, partly due to underlying weak infrastructure and limited access to clean water and sanitation. A comprehensive approach to cholera control is crucial, yet some have argued that oral cholera vaccination (OCV) might result in reduced hygiene practice among recipients. We evaluated the impact of an OCV campaign on knowledge and health practice in rural Haiti.

Methodology/Principal Findings

We administered baseline surveys on knowledge and practice relevant to cholera and waterborne disease to every 10th household during a census in rural Haiti in February 2012 (N = 811). An OCV campaign took place from May to June 2012, after which we administered identical surveys to 518 households randomly chosen from the same region in September 2012, and compared responses before and after the OCV campaign. Post-vaccination, knowledge had improved, with a significant increase in the percentage of respondents giving ≥3 correct responses on cholera transmission mechanisms (odds ratio [OR] 1.91; 95% confidence interval [CI] 1.52–2.40), preventive methods (OR 1.83; 95% CI 1.46–2.30), and water treatment modalities (OR 2.75; 95% CI 2.16–3.50). Relative to pre-vaccination, participants were more likely post-OCV to report always treating water (OR 1.62; 95% CI 1.28–2.05). Respondents were also more likely to report hand washing with soap and water >4 times daily post-vaccination (OR 1.30; 95% CI 1.03–1.64). Knowledge of treating water as a cholera prevention measure was associated with the practice of always treating water (OR 1.47; 95% CI 1.14–1.89). Post-vaccination, knowledge was associated with frequent hand washing (OR 2.47; 95% CI 1.35–4.51).

Conclusion

An OCV campaign in rural Haiti was associated with significant improvement in cholera knowledge and practices related to waterborne disease. OCV can be part of comprehensive cholera control and reinforce, rather than detract from, other control efforts in Haiti.

18.

Background and Purpose

Cerebral autosomal dominant arteriopathy with subcortical infarcts and leukoencephalopathy (CADASIL), caused by mutations in the NOTCH3 gene, is the most common monogenic disorder causing lacunar stroke and cerebral small vessel disease (SVD). Fabry disease (FD), due to mutations in the GLA gene, has been suggested as an underdiagnosed cause of stroke, and one of its features is SVD. Previous studies have reported varying prevalences of CADASIL and FD in stroke, likely because of the differing subtypes studied; no study has examined a large cohort of younger-onset SVD. We determined the prevalence of both disorders in a well-defined, MRI-verified cohort of patients with apparently sporadic lacunar infarction.

Methods

Caucasian patients with lacunar infarction, aged ≤70 years (mean age 56.7 years, SD 8.6), were recruited from 72 specialist stroke centres throughout the UK as part of the Young Lacunar Stroke DNA Resource. Patients with a previously confirmed monogenic cause of stroke were excluded. All MRIs and clinical histories were reviewed centrally. Screening was performed for NOTCH3 and GLA mutations.

Results

Of 994 subjects, five had pathogenic NOTCH3 mutations (R169C, R207C, R587C, C1222G and C323S), all resulting in the loss or gain of a cysteine in the NOTCH3 protein. All five patients had confluent leukoaraiosis (Fazekas grade ≥2). CADASIL prevalence was 0.5% overall (95% CI 0.2%–1.1%) and 1.5% (95% CI 0.6%–3.3%) among cases with confluent leukoaraiosis. No classic pathogenic FD mutations were found; one patient had a missense mutation (R118C) associated with late-onset FD.

Conclusion

CADASIL cases were rare and were detected only in SVD patients with confluent leukoaraiosis. No definite FD cases were detected.

19.

Background

This study explored the relationship between the glycated hemoglobin (HbA1c) level in patients with or without diabetes mellitus and future risks of cardiovascular disease and death.

Methods

Based on a nationally representative cohort, a total of 5,277 participants (7% with diabetes) were selected from Taiwan's Triple High Survey in 2002. Comorbidities, medication use, and the outcomes of cardiovascular disease and death were extracted from Taiwan's National Health Insurance Research Database and National Death Registry.

Results

After a median follow-up of 9.7 years, participants with diabetes had a higher incidence of new-onset cardiovascular disease (17.9 versus 3.16 cases per 1,000 person-years) and death (20.1 versus 4.96 cases per 1,000 person-years) than those without diabetes (both P < 0.001). Diabetes was associated with an increased risk of all-cause death after adjusting for all confounders (adjusted hazard ratio [HR] 2.29, 95% confidence interval [CI] 1.52–3.45). Every 1% increment in HbA1c was positively associated with the risk of total cardiovascular disease (HR 1.20, 95% CI 1.08–1.34) and the risk of death (HR 1.14, 95% CI 1.03–1.26) for all participants. Compared with the reference group with HbA1c below 5.5%, participants with HbA1c levels ≥7.5% had significantly elevated future risks of total cardiovascular disease (HR 1.82, 95% CI 1.01–3.26) and all-cause death (HR 2.45, 95% CI 1.45–4.14).

Conclusions/Interpretation

Elevated HbA1c levels were associated with increased risks of cardiovascular disease and death; suboptimal glycemic control, with an HbA1c level over 7.5% (58.5 mmol/mol), was strongly associated with increased risks of cardiovascular disease and all-cause death.
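The 58.5 mmol/mol figure is the IFCC equivalent of an NGSP HbA1c of 7.5%, obtained from the standard master equation; the quick check below reproduces it.

```python
# Hedged sketch: NGSP HbA1c (%) to IFCC units (mmol/mol) via the master equation.
def hba1c_pct_to_mmol_mol(pct):
    return (pct - 2.15) * 10.929

print(round(hba1c_pct_to_mmol_mol(7.5), 1))  # 58.5
```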

20.

Background

Although a majority of patients with hypertension require multidrug therapy, this is rarely considered when measuring adherence from refill data. Moreover, investigating the association between refill non-adherence to antihypertensive therapy (AHT) and elevated blood pressure (BP) has been advocated.

Objective

To identify factors associated with non-adherence to AHT, taking multidrug therapy into account, and to investigate the association between non-adherence to AHT and elevated BP.

Methods

We conducted a retrospective cohort study of patients with hypertension identified from a random sample of 5,025 Swedish adults. Two measures of adherence were estimated with the proportion-of-days-covered method (PDC ≥80%): (1) adherence to any antihypertensive medication and (2) adherence to the full AHT regimen. Multiple logistic regressions were performed to investigate the association between sociodemographic factors (age, sex, education, income), clinical factors (user profile, number of antihypertensive medications, healthcare use, cardiovascular comorbidities) and non-adherence. In addition, the association between non-adherence (long-term and in the month prior to BP measurement) and elevated BP was investigated.
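PDC is the share of days in an observation window on which the patient had medication available according to refill records. The sketch below is a minimal, generic illustration with hypothetical fills; the study's exact handling of overlapping supplies and the full-regimen variant (all prescribed classes covered simultaneously) may differ.

```python
# Hedged sketch: proportion of days covered (PDC) from refill records,
# with the 80% adherence threshold used in the abstract.
from datetime import date, timedelta

def pdc(fills, window_start, window_end):
    """fills: list of (dispense_date, days_supply). Returns the proportion of days covered."""
    covered = set()
    for dispensed, days_supply in fills:
        for d in range(days_supply):
            day = dispensed + timedelta(days=d)
            if window_start <= day <= window_end:
                covered.add(day)
    window_days = (window_end - window_start).days + 1
    return len(covered) / window_days

# Hypothetical refills of a single antihypertensive over a 90-day window
fills = [(date(2023, 1, 1), 30), (date(2023, 2, 5), 30), (date(2023, 3, 20), 30)]
value = pdc(fills, date(2023, 1, 1), date(2023, 3, 31))
print(f"PDC = {value:.2f}, adherent = {value >= 0.80}")  # PDC = 0.80, adherent = True
```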

Results

Non-adherence to any antihypertensive medication was higher among persons <65 years (odds ratio [OR] 2.75, 95% CI 1.18–6.43) and those with the lowest income (OR 2.05, 95% CI 1.01–4.16). Non-adherence to the full AHT regimen was higher among new users (OR 2.04, 95% CI 1.32–3.15), persons using specialized healthcare (OR 1.63, 95% CI 1.14–2.32), and those on multiple antihypertensive medications (OR 1.85, 95% CI 1.25–2.75 and OR 5.22, 95% CI 3.48–7.83 for 2 and ≥3 antihypertensive medications, respectively). Non-adherence to any antihypertensive medication in the month prior to the healthcare visit was associated with elevated BP.

Conclusion

Sociodemographic factors were associated with non-adherence to any antihypertensive medication, while clinical factors were associated with non-adherence to the full AHT regimen. These differing findings support considering the use of multiple antihypertensive medications when measuring refill adherence. Monitoring patients' refill adherence prior to a healthcare visit may facilitate the interpretation of elevated BP.
