Similar articles
20 similar articles found (search time: 46 ms)
1.

Background:

Falls cause more than 60% of head injuries in older adults. Lack of objective evidence on the circumstances of these events is a barrier to prevention. We analyzed video footage to determine the frequency of and risk factors for head impact during falls in older adults in 2 long-term care facilities.

Methods:

Over 39 months, we captured on video 227 falls involving 133 residents. We used a validated questionnaire to analyze the mechanisms of each fall. We then examined whether the probability of head impact was associated with upper-limb protective responses (hand impact) and fall direction.

Results:

Head impact occurred in 37% of falls, usually onto a vinyl or linoleum floor. Hand impact occurred in 74% of falls but had no significant effect on the probability of head impact (p = 0.3). An increased probability of head impact was associated with a forward initial fall direction, compared with backward falls (odds ratio [OR] 2.7, 95% confidence interval [CI] 1.3–5.9) or sideways falls (OR 2.8, 95% CI 1.2–6.3). In 36% of sideways falls, residents rotated to land backwards, which reduced the probability of head impact (OR 0.2, 95% CI 0.04–0.8).
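The odds ratios and confidence intervals reported above follow the standard 2 × 2 calculation with a Woolf (log-scale) interval. A minimal sketch in Python; the fall counts below are illustrative only (the abstract does not give the study's raw per-direction counts):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with a Woolf (log) 95% CI from a 2x2 table:
    a = exposed with outcome, b = exposed without,
    c = unexposed with outcome, d = unexposed without."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts for illustration: 40 of 80 forward falls
# produced head impact vs. 20 of 80 backward falls.
or_, lo, hi = odds_ratio_ci(40, 40, 20, 60)
print(f"OR {or_:.1f} (95% CI {lo:.1f}-{hi:.1f})")
```

With real per-cell counts from a study, the same function reproduces published odds ratios to rounding error.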

Interpretation:

Head impact was common in observed falls in older adults living in long-term care facilities, particularly in forward falls. Backward rotation during descent appeared to be protective, but hand impact was not. Attention to upper-limb strength and teaching rotational falling techniques (as in martial arts training) may reduce fall-related head injuries in older adults.

Falls from standing height or lower are the cause of more than 60% of hospital admissions for traumatic brain injury in adults older than 65 years.1–5 Traumatic brain injury accounts for 32% of hospital admissions and more than 50% of deaths from falls in older adults.1,6–8 Furthermore, the incidence and age-adjusted rate of fall-related traumatic brain injury is increasing,1,9 especially among people older than 80 years, among whom rates have increased threefold over the past 30 years.10 One-quarter of fall-related traumatic brain injuries in older adults occur in long-term care facilities.1

The development of improved strategies to prevent fall-related traumatic brain injuries is an important but challenging task. About 60% of residents in long-term care facilities fall at least once per year,11 and falls result from complex interactions of physiologic, environmental and situational factors.12–16 Any fall from standing height has sufficient energy to cause brain injury if direct impact occurs between the head and a rigid floor surface.17–19 Improved understanding is needed of the factors that separate falls that result in head impact and injury from those that do not.1,10 Falls in young adults rarely result in head impact, owing to protective responses such as use of the upper limbs to stop the fall, trunk flexion and rotation during descent.20–23 We have limited evidence of the efficacy of protective responses to falls among older adults.

In the current study, we analyzed video footage of real-life falls among older adults to estimate the prevalence of head impact from falls, and to examine the association between head impact and biomechanical and situational factors.

2.

Background:

Although warfarin has been extensively studied in clinical trials, little is known about rates of hemorrhage attributable to its use in routine clinical practice. Our objective was to examine incident hemorrhagic events in a large population-based cohort of patients with atrial fibrillation who were starting treatment with warfarin.

Methods:

We conducted a population-based cohort study involving residents of Ontario (age ≥ 66 yr) with atrial fibrillation who started taking warfarin between Apr. 1, 1997, and Mar. 31, 2008. We defined a major hemorrhage as any hospital visit for hemorrhage. We determined crude rates of hemorrhage during warfarin treatment, overall and stratified by CHADS2 score (congestive heart failure, hypertension, age ≥ 75 yr, diabetes mellitus and prior stroke, transient ischemic attack or thromboembolism).
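The CHADS2 stratification described above is a simple additive score. A minimal sketch using the standard CHADS2 weighting (1 point per risk factor, 2 for prior stroke/TIA/thromboembolism); this is an illustration, not code from the study:

```python
def chads2(chf, hypertension, age, diabetes, prior_stroke_tia_te):
    """CHADS2 score: 1 point each for congestive heart failure,
    hypertension, age >= 75 yr and diabetes mellitus, plus 2 points
    for prior stroke, TIA or thromboembolism."""
    score = int(chf) + int(hypertension) + int(age >= 75) + int(diabetes)
    score += 2 * int(prior_stroke_tia_te)
    return score

# An 80-year-old with hypertension and a prior TIA scores 4,
# placing them in the highest-risk stratum reported below.
print(chads2(chf=False, hypertension=True, age=80, diabetes=False,
             prior_stroke_tia_te=True))  # → 4
```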

Results:

We included 125 195 patients with atrial fibrillation who started treatment with warfarin during the study period. Overall, the rate of hemorrhage was 3.8% (95% confidence interval [CI] 3.8%–3.9%) per person-year. The risk of major hemorrhage was highest during the first 30 days of treatment. During this period, rates of hemorrhage were 11.8% (95% CI 11.1%–12.5%) per person-year in all patients and 16.7% (95% CI 14.3%–19.4%) per person-year among patients with a CHADS2 score of 4 or greater. Over the 5-year follow-up, 10 840 patients (8.7%) visited the hospital for hemorrhage; of these patients, 1963 (18.1%) died in hospital or within 7 days of being discharged.
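Rates "per person-year" like those above are crude incidence rates: events divided by total follow-up time. A minimal sketch with a normal-approximation Poisson interval; the event and person-year counts below are hypothetical (the cohort's actual person-time is not given in the abstract):

```python
import math

def rate_per_person_year_ci(events, person_years, z=1.96):
    """Crude event rate per person-year with a normal-approximation
    (Poisson) 95% CI: SE of the rate is sqrt(events)/person_years."""
    rate = events / person_years
    se = math.sqrt(events) / person_years
    return rate, rate - z * se, rate + z * se

# Hypothetical: 380 hemorrhages over 10 000 person-years of
# warfarin exposure gives a rate of 3.8% per person-year.
rate, lo, hi = rate_per_person_year_ci(380, 10_000)
print(f"{rate:.1%} per person-year (95% CI {lo:.1%}-{hi:.1%})")
```

Note the interval narrows as person-time grows, which is why a cohort of 125 195 patients can report a CI as tight as 3.8%–3.9%.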

Interpretation:

In this large cohort of older patients with atrial fibrillation, we found that rates of hemorrhage are highest within the first 30 days of warfarin therapy. These rates are considerably higher than the rates of 1%–3% reported in randomized controlled trials of warfarin therapy. Our study provides timely estimates of warfarin-related adverse events that may be useful to clinicians, patients and policy-makers as new options for treatment become available.

Atrial fibrillation is a major risk factor for stroke and systemic embolism, and strong evidence supports the use of the anticoagulant warfarin to reduce this risk.1–3 However, warfarin has a narrow therapeutic range and requires regular monitoring of the international normalized ratio to optimize its effectiveness and minimize the risk of hemorrhage.4,5 Although rates of major hemorrhage reported in trials of warfarin therapy typically range between 1% and 3% per person-year,6–11 observational studies suggest that rates may be considerably higher when warfarin is prescribed outside of a clinical trial setting,12–15 approaching 7% per person-year in some studies.13–15 The different safety profiles derived from clinical trials and observational data may reflect the careful selection of patients, precise definitions of bleeding and close monitoring in the trial setting. Furthermore, although a few observational studies suggest that hemorrhage rates are higher than generally appreciated, these studies involve small numbers of patients who received care in specialized settings.14–16 Consequently, the generalizability of their results to general practice may be limited.

More information regarding hemorrhage rates during warfarin therapy is particularly important in light of the recent introduction of new oral anticoagulant agents such as dabigatran, rivaroxaban and apixaban, which may be associated with different outcome profiles.17–19 There are currently no large studies offering real-world, population-based estimates of hemorrhage rates among patients taking warfarin, which are needed for future comparisons with new anticoagulant agents once they are widely used in routine clinical practice.20

We sought to describe the risk of incident hemorrhage in a large population-based cohort of patients with atrial fibrillation who had recently started warfarin therapy.

3.

Background:

Many clinical trials examine a composite outcome of admission to hospital and death, or infer a relationship between hospital admission and survival benefit. This assumes concordance of the outcomes “hospital admission” and “death.” However, whether the effects of a treatment on hospital admissions and readmissions correlate with its effect on serious outcomes such as death is unknown. We aimed to assess the correlation and concordance of effects of medical interventions on admission rates and mortality.

Methods:

We searched the Cochrane Database of Systematic Reviews from its inception to January 2012 (issue 1, 2012) for systematic reviews of treatment comparisons that included meta-analyses for both admission and mortality outcomes. For each meta-analysis, we synthesized treatment effects on admissions and death, from respective randomized trials reporting those outcomes, using random-effects models. We then measured the concordance of directions of effect sizes and the correlation of summary estimates for the 2 outcomes.

Results:

We identified 61 meta-analyses including 398 trials reporting mortality and 182 trials reporting admission rates; 125 trials reported both outcomes. In 27.9% of comparisons, the point estimates of treatment effects for the 2 outcomes were in opposite directions; in 8.2% of trials, the 95% confidence intervals did not overlap. We found no significant correlation between effect sizes for admission and death (Pearson r = 0.07, p = 0.6). Our results were similar when we limited our analysis to trials reporting both outcomes.
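The Pearson r reported above measures the linear correlation between the two sets of summary effect sizes (one pair per treatment comparison). A minimal sketch; the paired log effect sizes below are made up for illustration, not taken from the 61 meta-analyses:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two paired lists,
    e.g. log effect sizes for admission vs. death per comparison."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical log effect sizes for 5 comparisons (illustration only);
# a value near 0, as in the study, indicates no linear association.
admission = [-0.2, 0.1, 0.3, -0.1, 0.0]
death = [0.1, -0.3, 0.2, 0.0, -0.1]
print(round(pearson_r(admission, death), 2))
```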

Interpretation:

In this metaepidemiological study, admission and mortality outcomes did not correlate, and discordances occurred in about one-third of the treatment comparisons included in our analyses. Both outcomes convey useful information and should be reported separately, but extrapolating the benefits of admission to survival is unreliable and should be avoided.

Health care decisions often rely on effects of interventions described using rates of admission or readmission to hospital.1,2 These outcomes are typically regarded as indicators of insufficient quality of care and inefficient spending of health care resources;1,2 however, whether they can predict other serious clinical outcomes, such as death, is unknown.

Although effects on admission or readmission rates are often analyzed using large sets of routinely collected data, such as from administrative databases and electronic health records, many randomized controlled trials (RCTs) also collect data on admission rates, and some RCTs collect mortality data. Moreover, some trials combine death and admission to hospital as the primary composite outcome3 to increase the study’s power to detect significant differences and reduce the required study size.4 However, the interpretation of such a combination is difficult when the treatment effects on the 2 components are not concordant,5 for example, when more patients survive but rates of admission increase. In such cases, composite outcomes may dilute or obscure clinically significant treatment effects on important individual components,4,6 and incomplete disclosure of individual effects may mislead the interpretation of the results.4

We investigated systematic reviews of treatment comparisons that included meta-analyses of RCTs assessing effects on both rates of admission and mortality. We used the reported trial data to assess whether effects on admission rates were concordant with effects on mortality or whether it was possible to identify interventions and diseases in which these 2 outcomes would provide differing pictures of the merits of the tested interventions.

4.

Background:

Multimorbidity, the presence of more than 1 long-term disorder, is associated with increased use of health services, but unplanned admissions to hospital may often be undesirable. Furthermore, socioeconomic deprivation and mental health comorbidity may lead to additional unplanned admissions. We examined the association between unplanned admission to hospital and physical multimorbidity, mental health and socioeconomic deprivation.

Methods:

We conducted a retrospective cohort study using data from 180 815 patients aged 20 years and older who were registered with 40 general practices in Scotland. Details of 32 physical and 8 mental health morbidities were extracted from the patients’ electronic health records (as of Apr. 1, 2006) and linked to hospital admission data. We then recorded the occurrence of unplanned or potentially preventable unplanned acute (nonpsychiatric) admissions to hospital in the subsequent 12 months. We used logistic regression models, adjusting for age and sex, to determine associations between unplanned or potentially preventable unplanned admissions to hospital and physical multimorbidity, mental health and socioeconomic deprivation.

Results:

We identified 10 828 (6.0%) patients who had at least 1 unplanned admission to hospital and 2037 (1.1%) patients who had at least 1 potentially preventable unplanned admission to hospital. Both unplanned and potentially preventable unplanned admissions were independently associated with increasing physical multimorbidity (for ≥ 4 v. 0 conditions, odds ratio [OR] 5.87 [95% confidence interval (CI) 5.45–6.32] for unplanned admissions, OR 14.38 [95% CI 11.87–17.43] for potentially preventable unplanned admissions), mental health conditions (for ≥ 1 v. 0 conditions, OR 2.01 [95% CI 1.92–2.09] for unplanned admissions, OR 1.80 [95% CI 1.64–1.97] for potentially preventable unplanned admissions) and socioeconomic deprivation (for most v. least deprived quintile, OR 1.56 [95% CI 1.43–1.70] for unplanned admissions, OR 1.98 [95% CI 1.63–2.41] for potentially preventable unplanned admissions).

Interpretation:

Physical multimorbidity was strongly associated with unplanned admission to hospital, including admissions that were potentially preventable. The risk of admission to hospital was exacerbated by the coexistence of mental health conditions and socioeconomic deprivation.

Multimorbidity — usually defined as the presence of more than 1 long-term disorder — is becoming the norm rather than the exception as populations age.1,2 A recent study found that most people older than 65 years of age had multimorbidity, and the mean number of comorbidities per person increased with age;1 however, multimorbidity is not confined to older adults.3

Multimorbidity is associated with a range of adverse outcomes. People with multimorbidity have worse physical, social and psychological quality of life4 and increased mortality.5 Mental health conditions often accompany and exacerbate long-term physical conditions, leading to poor health outcomes, reduced quality of life and increased costs.1,6,7 Furthermore, health services are largely organized to provide care for single diseases, particularly in hospitals or under specialist care. Indeed, many aspects of care are poor for patients with multimorbidity.8–10 This situation may be further aggravated among patients who are socioeconomically disadvantaged, because they often have poorer health and higher health care needs, while also experiencing poorer provision of services, than their more advantaged counterparts.11 A lack of social and personal resources, coupled with multiple stresses, makes coping difficult for these patients,12 and the multiplicity of physical, psychological and social problems means that family physicians sometimes struggle to support patients with multimorbidity in deprived settings.13

Multimorbidity is associated with increased use of health services; however, whereas high use of primary and specialist ambulatory care may be seen as an appropriate response to multimorbidity, frequent unplanned admissions to hospital will often be undesirable.14 Unfortunately, there are relatively few large studies that have examined the association between multimorbidity and unplanned hospital admissions.15–17 Moreover, such studies did not separately examine physical and mental health morbidity and did not account for the additional effect of socioeconomic deprivation — shortcomings we hope to have addressed. Using linked routine clinical primary care and hospital data, we sought to determine the association between unplanned admissions to hospital and physical multimorbidity, as well as any additional effect of mental health morbidity and socioeconomic deprivation.

5.

Background:

Compression ultrasonography performed serially over a 7-day period is recommended for the diagnosis of deep vein thrombosis in symptomatic pregnant women, but whether this approach is safe is unknown. We evaluated the safety of withholding anticoagulation from pregnant women with suspected deep vein thrombosis following negative serial compression ultrasonography and iliac vein imaging.

Methods:

Consecutive pregnant women who presented with suspected deep vein thrombosis underwent compression ultrasonography and Doppler imaging of the iliac vein of the symptomatic leg(s). Women whose initial test results were negative underwent serial testing on 2 occasions over the next 7 days. Women not diagnosed with deep vein thrombosis were followed for a minimum of 3 months for the development of symptomatic deep vein thrombosis or pulmonary embolism.

Results:

In total, 221 pregnant women presented with suspected deep vein thrombosis. Deep vein thrombosis was diagnosed in 16 (7.2%) women by initial compression ultrasonography and Doppler studies; none were identified as having deep vein thrombosis on serial testing. One patient with normal serial testing had a pulmonary embolism diagnosed 7 weeks later. The overall prevalence of deep vein thrombosis was 7.7% (17/221); of these, 65% (11/17) of cases were isolated to the iliofemoral veins and 12% (2/17) were isolated iliac deep vein thromboses. The incidence of venous thromboembolism during follow-up was 0.49% (95% confidence interval [CI] 0.09%–2.71%). The sensitivity of serial compression ultrasonography with Doppler imaging was 94.1% (95% CI 69.2%–99.7%), the negative predictive value was 99.5% (95% CI 96.9%–100%), and the negative likelihood ratio was 0.068 (95% CI 0.01–0.39).
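The sensitivity and negative predictive value above can be reproduced from the counts the abstract reports: 17 DVTs among 221 women, 16 detected by the protocol, and 1 venous thromboembolism (the later pulmonary embolism) among the 205 women with negative workups. A minimal sketch, with the 2 × 2 cells reconstructed from those figures:

```python
def sensitivity(tp, fn):
    """True-positive rate of the imaging protocol."""
    return tp / (tp + fn)

def npv(tn, fn):
    """Negative predictive value: probability that a negative
    serial workup truly excludes deep vein thrombosis."""
    return tn / (tn + fn)

# Reconstructed counts: 16 true positives, 1 false negative
# (the PE diagnosed at 7 weeks), 204 true negatives.
tp, fn, tn = 16, 1, 204
print(f"sensitivity {sensitivity(tp, fn):.1%}")  # 16/17 = 94.1%
print(f"NPV {npv(tn, fn):.1%}")                  # 204/205 = 99.5%
```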

Interpretation:

Serial compression ultrasonography with Doppler imaging of the iliac vein performed over a 7-day period excludes deep vein thrombosis in symptomatic pregnant women.

Over the last 2 decades, venous compression ultrasonography has become the imaging test of choice for diagnosing deep vein thrombosis in the lower extremities of men and nonpregnant women.1–4 Although this test is highly sensitive (about 97%) for deep vein thrombosis involving the femoral and popliteal veins, compression ultrasonography is less sensitive for the detection of isolated deep vein thrombosis in the calf.5 Because proximal propagation of isolated calf deep vein thrombosis occurs in about 20% of cases, serial compression ultrasonography performed over a 7-day period is recommended to definitely exclude such thromboses if the results of the initial compression ultrasound are negative.6

The use of serial compression ultrasonography in symptomatic men and nonpregnant women has been validated in prospective studies,1,7 suggesting that withholding anticoagulation from symptomatic patients whose serial compression ultrasound results are negative is safe, with less than 2% of patients subsequently being diagnosed with deep vein thrombosis.1,7,8 Although the use of serial compression ultrasonography has not been validated in pregnant women, this strategy is also advocated for symptomatic pregnant women.9

The appeal of using compression ultrasonography for diagnosing deep vein thrombosis in pregnant women is obvious: it is noninvasive, widely available and does not expose the fetus to ionizing radiation. However, generalizing results from studies involving men and nonpregnant women to pregnant women is problematic because of differences in clinical presentation and anatomic distribution of deep vein thromboses.10 Compared with men and nonpregnant women, pregnant women more often present with very proximal deep vein thrombosis (including isolated iliac vein deep vein thrombosis); isolated distal calf deep vein thromboses are infrequent.10 In a recent review of the literature, we found that 62% of all deep vein thromboses in symptomatic pregnant women were in the iliofemoral veins, 17% were in the iliac vein alone, and 6% were in the calf veins.10 In contrast, in the general population, more than 80% of deep vein thromboses involve calf veins, and iliofemoral or isolated iliac deep vein thromboses are uncommon (< 5%).1–4

Physiologic changes associated with pregnancy might affect blood flow patterns and normal compressibility of the proximal veins, thereby affecting the diagnostic accuracy of compression ultrasonography. This technique cannot be used to detect isolated deep vein thromboses in the iliac vein; these veins are not compressible because of their intrapelvic location. Whether Doppler studies are sensitive for detecting deep vein thromboses in these high proximal veins (i.e., iliac veins) has not been well studied, but data suggest that this method of detection compares favourably to compression ultrasonography in men and nonpregnant women for proximal deep vein thromboses.11 The use of Doppler imaging in pregnant women for the purpose of detecting iliac vein deep vein thromboses has been reported in the literature,12,13 but it has not been adequately evaluated.

Currently, the standard practice of diagnosing deep vein thrombosis in symptomatic pregnant women is by compression ultrasonography. If the results of the compression ultrasound are negative, Doppler imaging of the iliac vein (with or without vagal manoeuvres) is recommended, particularly for women with a high clinical probability of deep vein thrombosis in the iliac vein.9,12,13 This diagnostic approach is advocated despite the absence of any prospective studies validating its use. In this study, we evaluated the diagnostic accuracy of serial compression ultrasonography and Doppler imaging of the iliac veins over a 7-day period among symptomatic pregnant women.

6.

Background:

Some children feel pain during wound closures using tissue adhesives. We sought to determine whether a topically applied analgesic solution of lidocaine–epinephrine–tetracaine would decrease pain during tissue adhesive repair.

Methods:

We conducted a randomized, placebo-controlled, blinded trial involving 221 children between the ages of 3 months and 17 years. Patients were enrolled between March 2011 and January 2012 when presenting to a tertiary-care pediatric emergency department with lacerations requiring closure with tissue adhesive. Patients received either lidocaine–epinephrine–tetracaine or placebo before undergoing wound closure. Our primary outcome was the pain rating of adhesive application according to the colour Visual Analogue Scale and the Faces Pain Scale — Revised. Our secondary outcomes were physician ratings of difficulty of wound closure and wound hemostasis, in addition to their prediction as to which treatment the patient had received.

Results:

Children who received the analgesic before wound closure reported less pain (median 0.5, interquartile range [IQR] 0.25–1.50) than those who received placebo (median 1.00, IQR 0.38–2.50) as rated using the colour Visual Analogue Scale (p = 0.01) and Faces Pain Scale – Revised (median 0.00, IQR 0.00–2.00, for analgesic v. median 2.00, IQR 0.00–4.00, for placebo, p < 0.01). Patients who received the analgesic were significantly more likely to report having or to appear to have a pain-free procedure (relative risk [RR] of pain 0.54, 95% confidence interval [CI] 0.37–0.80). Complete hemostasis of the wound was also more common among patients who received lidocaine–epinephrine–tetracaine than among those who received placebo (78.2% v. 59.3%, p = 0.008).
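The relative risk of pain reported above follows the usual ratio-of-proportions calculation with a log-scale confidence interval. A minimal sketch; the group counts below are hypothetical, chosen only to land near the reported summary RR of 0.54 (the abstract does not give the raw counts):

```python
import math

def relative_risk_ci(a, n1, c, n2, z=1.96):
    """Relative risk with a log-normal 95% CI.
    a/n1 = events in the treatment group, c/n2 = events in control."""
    rr = (a / n1) / (c / n2)
    se = math.sqrt((1/a - 1/n1) + (1/c - 1/n2))  # SE of log(RR)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Hypothetical counts for illustration: 30/110 painful procedures
# with the analgesic vs. 55/109 with placebo.
rr, lo, hi = relative_risk_ci(30, 110, 55, 109)
print(f"RR {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```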

Interpretation:

Treating minor lacerations with lidocaine–epinephrine–tetracaine before wound closure with tissue adhesive reduced ratings of pain and increased the proportion of pain-free repairs among children aged 3 months to 17 years. This low-risk intervention may benefit children with lacerations requiring tissue adhesives instead of sutures. Trial registration: ClinicalTrials.gov, no. PR 6138378804.

Minor laceration repair with tissue adhesive, or “skin glue,” is common in pediatrics. Although less painful than cutaneous sutures,1 tissue adhesives polymerize through an exothermic reaction that may cause a burning, painful sensation. Pain is dependent on the specific formulation of the adhesive used and the method of application. One study of different tissue adhesives reported 23.8%–40.5% of participants feeling a “burning sensation,”2 whereas another study reported “pain” in 17.6%–44.1% of children.3 The amounts of adhesive applied, method of application and individual patient characteristics can also influence the feeling of pain.3,4 Because tissue adhesives polymerize on contact with moisture,4,5 poor wound hemostasis has the potential to cause premature setting of the adhesive, leading to less efficient and more painful repairs.6

Preventing procedural pain is a high priority in pediatric care.7 Inadequate analgesia for pediatric procedures may result in more complicated procedures, increased pain sensitivity with future procedures8 and increased fear and anxiety of medical experiences persisting into adulthood.9 A practical method to prevent pain during laceration repairs with tissue adhesive would have a substantial benefit for children.

A topically applied analgesic solution containing lidocaine–epinephrine–tetracaine with vasoconstrictive properties provides safe and effective pain control during wound repair using sutures.10 A survey of pediatric emergency fellowship directors in the United States reported that 76% of respondents use this solution or a similar solution when suturing 3-cm chin lacerations in toddlers.11 However, in a hospital chart review, this solution was used in less than half of tissue adhesive repairs, the remainder receiving either local injection of anesthetic or no pain control.12 Reluctance to use lidocaine–epinephrine–tetracaine with tissue adhesive may be due to the perception that it is not worth the minimum 20-minute wait required for the analgesic to take effect13 or to a lack of awareness that tissue adhesives can cause pain.

We sought to investigate whether preapplying lidocaine–epinephrine–tetracaine would decrease pain in children during minor laceration repair using tissue adhesive.

7.

Background:

Results of randomized controlled trials evaluating zinc for the treatment of the common cold are conflicting. We conducted a systematic review and meta-analysis to evaluate the efficacy and safety of zinc for such use.

Methods:

We searched electronic databases and other sources for studies published through to Sept. 30, 2011. We included all randomized controlled trials comparing orally administered zinc with placebo or no treatment. Assessment for study inclusion, data extraction and risk-of-bias analyses were performed in duplicate. We conducted meta-analyses using a random-effects model.

Results:

We included 17 trials involving a total of 2121 participants. Compared with patients given placebo, those receiving zinc had a shorter duration of cold symptoms (mean difference −1.65 days, 95% confidence interval [CI] −2.50 to −0.81); however, heterogeneity was high (I2 = 95%). Zinc shortened the duration of cold symptoms in adults (mean difference −2.63, 95% CI −3.69 to −1.58), but no significant effect was seen among children (mean difference −0.26, 95% CI −0.78 to 0.25). Heterogeneity remained high in all subgroup analyses, including by age, dose of ionized zinc and zinc formulation. Any adverse event (risk ratio [RR] 1.24, 95% CI 1.05 to 1.46), bad taste (RR 1.65, 95% CI 1.27 to 2.16) and nausea (RR 1.64, 95% CI 1.19 to 2.27) were all more common in the zinc group than in the placebo group.
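The I² heterogeneity statistic quoted above is derived from Cochran's Q and the number of pooled studies. A minimal sketch of the standard Higgins I² formula; the Q value below is hypothetical, chosen to illustrate the 95% level reported for this 17-trial pooling:

```python
def i_squared(q, k):
    """Higgins I^2: the fraction of variability across k studies
    attributable to heterogeneity rather than chance, where q is
    Cochran's Q statistic (floored at 0 when Q < df)."""
    df = k - 1
    return max(0.0, (q - df) / q)

# A hypothetical Q of 320 across 17 trials (df = 16) gives I^2 = 95%.
print(f"I^2 = {i_squared(320, 17):.0%}")
```

Values above roughly 75% are conventionally read as high heterogeneity, which is why the authors pooled with a random-effects model and pursued subgroup analyses.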

Interpretation:

The results of our meta-analysis showed that oral zinc formulations may shorten the duration of symptoms of the common cold. However, large high-quality trials are needed before definitive recommendations for clinical practice can be made. Adverse effects were common and should be the point of future study, because a good safety and tolerance profile is essential when treating this generally mild illness.

The common cold is a frequent respiratory infection experienced 2 to 4 times a year by adults and up to 8 to 10 times a year by children.1–3 Colds can be caused by several viruses, of which rhinoviruses are the most common.4 Despite their benign nature, colds can lead to substantial morbidity, absenteeism and lost productivity.5–7

Zinc, which can inhibit rhinovirus replication and has activity against other respiratory viruses such as respiratory syncytial virus,8 is a potential treatment for the common cold. The exact mechanism of zinc’s activity on viruses remains uncertain. Zinc may also reduce the severity of cold symptoms by acting as an astringent on the trigeminal nerve.9,10

A recent meta-analysis of randomized controlled trials concluded that zinc was effective at reducing the duration and severity of common cold symptoms.11 However, there was considerable heterogeneity reported for the primary outcome (I2 = 93%), and subgroup analyses to explore between-study variations were not performed. The efficacy of zinc therefore remains uncertain, because it is unknown whether the variability among studies was due to methodologic diversity (i.e., risk of bias and therefore uncertainty in zinc’s efficacy) or differences in study populations or interventions (i.e., zinc dose and formulation).

We conducted a systematic review and meta-analysis to evaluate the efficacy and safety of zinc for the treatment of the common cold. We sought to improve upon previous systematic reviews11–17 by exploring the heterogeneity with subgroups identified a priori, identifying new trials by instituting a broader search and obtaining additional data from authors.

8.

Background:

Screening for methicillin-resistant Staphylococcus aureus (MRSA) is intended to reduce nosocomial spread by identifying patients colonized by MRSA. Given the widespread use of this screening, we evaluated its potential clinical utility in predicting the resistance of clinical isolates of S. aureus.

Methods:

We conducted a 2-year retrospective cohort study that included patients with documented clinical infection with S. aureus and prior screening for MRSA. We determined test characteristics, including sensitivity and specificity, of screening for predicting the resistance of subsequent S. aureus isolates.

Results:

Of 510 patients included in the study, 53 (10%) had positive results from MRSA screening, and 79 (15%) of infecting isolates were resistant to methicillin. Screening for MRSA predicted methicillin resistance of the infecting isolate with 99% (95% confidence interval [CI] 98%–100%) specificity and 63% (95% CI 52%–74%) sensitivity. When screening swabs were obtained within 48 hours before isolate collection, sensitivity increased to 91% (95% CI 71%–99%) and specificity was 100% (95% CI 97%–100%), yielding a negative likelihood ratio of 0.09 (95% CI 0.01–0.3) and a negative predictive value of 98% (95% CI 95%–100%). The time between swab and isolate collection was a significant predictor of concordance of methicillin resistance in swabs and isolates (odds ratio 6.6, 95% CI 1.6–28.2).
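The negative likelihood ratios above follow directly from the reported sensitivity and specificity: LR− = (1 − sensitivity) / specificity. A minimal sketch using the figures given in the abstract:

```python
def negative_lr(sensitivity, specificity):
    """Negative likelihood ratio: factor by which a negative screen
    lowers the odds of methicillin resistance."""
    return (1 - sensitivity) / specificity

# Overall test characteristics: sensitivity 63%, specificity 99%.
print(round(negative_lr(0.63, 0.99), 2))
# With the 48-hour subgroup's sensitivity of 91% and specificity of
# 100%, the ratio falls to the reported 0.09.
print(round(negative_lr(0.91, 1.00), 2))  # → 0.09
```

The smaller the ratio, the more strongly a negative swab argues against resistance, which is why the 48-hour window is where screening is most useful for ruling MRSA out.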

Interpretation:

A positive result from MRSA screening predicted methicillin resistance in a culture-positive clinical infection with S. aureus. Negative results on MRSA screening were most useful for excluding methicillin resistance of a subsequent infection with S. aureus when the screening swab was obtained within 48 hours before collection of the clinical isolate.

Antimicrobial resistance is a global problem. The prevalence of resistant bacteria, including methicillin-resistant Staphylococcus aureus (MRSA), has reached high levels in many countries.1–3 Methicillin resistance in S. aureus is associated with excess mortality, hospital stays and health care costs,3,4 possibly owing to increased virulence or less effective treatments for MRSA compared with methicillin-sensitive S. aureus (MSSA).5

The initial selection of appropriate empirical antibiotic treatment affects mortality, morbidity and potential health care expenditures.6–8 The optimal choice of antibiotics in S. aureus infections is important for 3 major reasons: β-lactam antibiotics have shown improved efficacy over vancomycin and are the ideal treatment for susceptible strains of S. aureus;6 β-lactam antibiotics are ineffective against MRSA, and so vancomycin or other newer agents must be used empirically when MRSA is suspected; and unnecessary use of broad-spectrum antibiotics (e.g., vancomycin) can lead to the development of further antimicrobial resistance.9 It is therefore necessary to make informed decisions regarding selection of empirical antibiotics.10–13 Consideration of a patient’s previous colonization status is important, because colonization predates most hospital- and community-acquired infections.10,14

Universal or targeted surveillance for MRSA has been implemented widely as a means of limiting transmission of this antibiotic-resistant pathogen.15,16 Although results of MRSA screening are not intended to guide empirical treatment, they may offer an additional benefit among patients in whom clinical infection with S. aureus develops. Studies that examined the effects of MRSA carriage on the subsequent likelihood of infection allude to the potential diagnostic benefit of prior screening for MRSA.17,18 Colonization by MRSA at the time of hospital admission is associated with a 13-fold increased risk of subsequent MRSA infection.17,18 Moreover, studies that examined nasal carriage of S. aureus after documented S. aureus bacteremia have shown remarkable concordance between the genotypes of paired colonizing and invasive strains (82%–94%).19,20 The purpose of our study was to identify the usefulness of prior screening for MRSA for predicting methicillin resistance in culture-positive S. aureus infections.

9.

Background:

No primary care practice model has been shown to be superior in achieving high-quality primary care. We aimed to identify the organizational characteristics of primary care practices that provide high-quality primary care.

Methods:

We performed a cross-sectional observational study involving a stratified random sample of 37 primary care practices from 3 regions of Quebec. We recruited 1457 patients who had 1 of 2 chronic care conditions or 1 of 6 episodic care conditions. The main outcome was the overall technical quality score. We measured organizational characteristics by use of a validated questionnaire and the Team Climate Inventory. Statistical analyses were based on multilevel regression modelling.

Results:

The following characteristics were strongly associated with overall technical quality of care score: physician remuneration method (27.0; 95% confidence interval [CI] 19.0–35.0), extent of sharing of administrative resources (7.6; 95% CI 0.8–14.4), presence of allied health professionals (15.3; 95% CI 5.4–25.2) and/or specialist physicians (19.6; 95% CI 8.3–30.9), the presence of mechanisms for maintaining or evaluating competence (7.7; 95% CI 3.0–12.4) and average organizational access to the practice (4.9; 95% CI 2.6–7.2). The number of physicians (1.2; 95% CI 0.6–1.8) and the average Team Climate Inventory score (1.3; 95% CI 0.1–2.5) were modestly associated with high-quality care.

Interpretation:

We identified a common set of organizational characteristics associated with high-quality primary care. Many of these characteristics are amenable to change through practice-level organizational changes.

A health care system is only as strong as its primary care sector,1 which provides “entry into the system for all new needs and problems, provides person-focused (not disease-oriented) care over time, provides care for all but very uncommon or unusual conditions …”2 Patient enrolment, team-based care, information technology, and funding and remuneration schemes that foster comprehensiveness and collaboration are key characteristics of effective primary care systems.3 None can be singled out as the most determining, but how they are clustered defines a limited set of organizational models that have been associated with a variety of outcomes.4 Canadian provinces have implemented different primary care models with different scopes of changes.5 Research has not yet identified a “winning” model. For example, in Ontario, community health centres deliver better chronic illness care6 but have less accessibility than fee-for-service enrolment models,7 and no model provided more comprehensive preventive care.8 Walk-in clinics achieved better quality scores than did family medicine clinics for treatment of a set of acute problems.9 How the work is organized may be as important, if not more important, than what the model is called.

These observations suggest that the challenges associated with providing high-quality services differ depending on the nature of care considered.9–11 Even if chronic illness is a major challenge, the quality of care must not be improved at the expense of accessibility, preventive care or good episodic care, which are all essential components of primary care.

In this study, we report the results of the quantitative component of a multimethod observational study conducted in Quebec to determine which organizational characteristics of primary care practices are associated with high-quality care. We sought to find a quality measure of care that would encompass the comprehensive nature of primary care (episodic, chronic and preventive care), and we explored how the contribution of organizational characteristics varied based on the type of care provided.  相似文献   

10.

Background:

Chronic kidney disease is an important risk factor for death and cardiovascular-related morbidity, but estimates to date of its prevalence in Canada have generally been extrapolated from the prevalence of end-stage renal disease. We used direct measures of kidney function collected from a nationally representative survey population to estimate the prevalence of chronic kidney disease among Canadian adults.

Methods:

We examined data for 3689 adult participants of cycle 1 of the Canadian Health Measures Survey (2007–2009) for the presence of chronic kidney disease. We also calculated the age-standardized prevalence of cardiovascular risk factors by chronic kidney disease group. We cross-tabulated the estimated glomerular filtration rate (eGFR) with albuminuria status.

Results:

The prevalence of chronic kidney disease during the period 2007–2009 was 12.5%, representing about 3 million Canadian adults. The estimated prevalence of stage 3–5 disease was 3.1% (0.73 million adults), and that of albuminuria was 10.3% (2.4 million adults). The prevalences of diabetes, hypertension and hypertriglyceridemia were all significantly higher among adults with chronic kidney disease than among those without it. The prevalence of albuminuria was high, even among those whose eGFR was 90 mL/min per 1.73 m2 or greater (10.1%) and those without diabetes or hypertension (9.3%). Awareness of kidney dysfunction among adults with stage 3–5 chronic kidney disease was low (12.0%).

Interpretation:

The prevalence of kidney dysfunction was substantial in the survey population, including among individuals without hypertension or diabetes, the conditions most likely to prompt screening for kidney dysfunction. These findings highlight the potential for missed opportunities for early intervention and secondary prevention of chronic kidney disease.

Chronic kidney disease is defined as the presence of kidney damage or reduced kidney function for more than 3 months and requires either a measured or estimated glomerular filtration rate (eGFR) of less than 60 mL/min per 1.73 m2, or the presence of abnormalities in urine sediment, renal imaging or biopsy results.1 Between 1.3 million and 2.9 million Canadians are estimated to have chronic kidney disease, based on an extrapolation of the prevalence of end-stage renal disease.2 In the United States, the 1999–2004 National Health and Nutrition Examination Survey reported a prevalence of 5.0% for stage 1 and 2 disease and 8.1% for stage 3 and 4 disease.3,4

Chronic kidney disease has been identified as a risk factor for death and cardiovascular-related morbidity and is a substantial burden on the health care system.1,5 Hemodialysis costs the Canadian health care system about $60 000 per patient per year of treatment.1 The increasing prevalence of chronic kidney disease can be attributed in part to the growing elderly population and to increasing rates of diabetes and hypertension.1,6,7

Albuminuria, which can result from abnormal vascular permeability, atherosclerosis or renal disease, has gained recognition as an independent risk factor for progressive renal dysfunction and adverse cardiovascular outcomes.8–10 In earlier stages of chronic kidney disease, albuminuria has been shown to be more predictive of renal and cardiovascular events than eGFR.4,9 This has prompted the call for a new risk stratification for cardiovascular outcomes based on both eGFR and albuminuria.11

A recent review advocated screening people for chronic kidney disease if they have hypertension, diabetes, clinically evident cardiovascular disease or a family history of kidney failure, or are more than 60 years old.4 The Canadian Society of Nephrology published guidelines on the management of chronic kidney disease but did not offer guidance on screening.1 The Canadian Diabetes Association recommends annual screening with the use of an albumin:creatinine ratio,12 and the Canadian Hypertension Education Program guideline recommends urinalysis as part of the initial assessment of hypertension.13 Screening for chronic kidney disease on the basis of eGFR and albuminuria is not considered to be cost-effective in the general population, among older people or among people with hypertension.14

The objective of our study was to use direct measures (biomarkers) of kidney function to generate nationally representative, population-based prevalence estimates of chronic kidney disease among Canadian adults overall and in clinically relevant groups.  相似文献   
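The staging discussed above hinges on the eGFR threshold of 60 mL/min per 1.73 m². The abstract does not state which estimating equation the survey analysis used; as one common choice, the 2009 CKD-EPI creatinine equation can be sketched as follows (the inputs here are illustrative, not survey data):

```python
def ckd_epi_egfr(scr_mg_dl, age, female, black=False):
    """Estimated GFR (mL/min per 1.73 m^2) via the 2009 CKD-EPI
    creatinine equation; scr_mg_dl is serum creatinine in mg/dL."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    ratio = scr_mg_dl / kappa
    egfr = (141
            * min(ratio, 1) ** alpha
            * max(ratio, 1) ** -1.209
            * 0.993 ** age)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr

# An elevated creatinine in an older adult falls well below the
# stage 3-5 threshold of 60 mL/min per 1.73 m^2:
print(ckd_epi_egfr(scr_mg_dl=1.8, age=70, female=False))
```

An eGFR of 60 or above with albuminuria would still qualify as chronic kidney disease, which is why the abstract cross-tabulates eGFR with albuminuria status.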

11.

Background:

There is an increased risk of venous thromboembolism among women taking oral contraceptives. However, whether there is an additional risk among women with polycystic ovary syndrome (PCOS) is unknown.

Methods:

We developed a population-based cohort from the IMS LifeLink Health Plan Claims Database, which includes managed care organizations in the United States. Women aged 18–46 years taking combined oral contraceptives and who had a claim for PCOS (n = 43 506) were matched, based on a propensity score, to control women (n = 43 506) taking oral contraceptives. Venous thromboembolism was defined using administrative coding and use of anticoagulation. We used Cox proportional hazards models to assess the relative risk (RR) of venous thromboembolism among users of combined oral contraceptives with and without PCOS.

Results:

The incidence of venous thromboembolism among women with PCOS was 23.7/10 000 person-years, while that for matched controls was 10.9/10 000 person-years. Women with PCOS taking combined oral contraceptives had an RR for venous thromboembolism of 2.14 (95% confidence interval [CI] 1.41–3.24) compared with other contraceptive users. The incidence of venous thromboembolism was 6.3/10 000 person-years among women with PCOS not taking oral contraceptives; the incidence was 4.1/10 000 person-years among matched controls. The RR of venous thromboembolism among women with PCOS not taking oral contraceptives was 1.55 (95% CI 1.10–2.19).
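The incidence rates above are event counts per person-time. A small sketch, using hypothetical numerators and follow-up chosen only to reproduce the reported rates (the abstract reports rates, not raw counts); note the crude rate ratio (~2.17) differs slightly from the adjusted Cox estimate of 2.14:

```python
def incidence_per_10k(events, person_years):
    """Incidence rate expressed per 10 000 person-years."""
    return events / person_years * 10_000

# Hypothetical counts consistent with the reported rates:
pcos_rate = incidence_per_10k(events=237, person_years=100_000)
control_rate = incidence_per_10k(events=109, person_years=100_000)
crude_rr = pcos_rate / control_rate
print(round(pcos_rate, 1), round(control_rate, 1), round(crude_rr, 2))  # 23.7 10.9 2.17
```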

Interpretation:

We found a 2-fold increased risk of venous thromboembolism among women with PCOS who were taking combined oral contraceptives and a 1.5-fold increased risk among women with PCOS not taking oral contraceptives. Physicians should consider the increased risk of venous thromboembolism when prescribing contraceptive therapy to women with PCOS.

Polycystic ovary syndrome (PCOS) is the most common endocrine disorder among women of reproductive age. The National Institutes of Health criteria put its prevalence in the United States at between 6% and 10%, while the Rotterdam criteria put the prevalence as high as 15%.1 Although its cause is not entirely known, the diagnostic criteria include oligo- or anovulation, clinical and/or biochemical signs of hyperandrogenism, and polycystic ovaries.2 Women often present with clinical manifestations of high androgen levels, including facial hair growth (hirsutism), acne vulgaris and hair loss on the scalp. Previous studies reported the prevalence of impaired glucose tolerance to be 31.1%–35.2% and the prevalence of type 2 diabetes to be 7.5%–9.8% among women with PCOS.3,4 A recent consensus workshop reported that the prevalences of several known risk factors for cardiovascular disease (hypertension, diabetes, abdominal obesity, psychological factors, smoking, altered ApoA1/ApoB ratios) are doubled among women with PCOS compared with matched controls.1,5

Combined oral contraceptives are the mainstay treatment for PCOS. However, they are also known to elevate the risk of venous thromboembolism and cardiovascular disease.6 To date, contraceptive studies involving women with PCOS have focused mainly on efficacy, evaluating the effect of combined oral contraceptives on the reduction of hirsutism and hyperandrogenism.7,8 Two studies assessed the metabolic effects of combined oral contraceptives in PCOS, but these studies had small sample sizes and could not evaluate for cardiovascular events.9,10

Although women with PCOS have an increase in both cardiovascular risk factors and subclinical cardiovascular disease,11 recent guidelines have concluded that there are no data in the literature assessing the association between the use of oral contraceptives and cardiovascular disease among women with PCOS.2 Because combined oral contraceptives are the mainstay treatment, our objective was to determine whether women with PCOS taking combined oral contraceptives have a greater risk of venous thromboembolism compared with other contraceptive users. We also examined whether women with PCOS not taking oral contraceptives had an increased risk of venous thromboembolism compared with the general population.  相似文献   

12.
13.

Background:

There have been postmarketing reports of adverse cardiovascular events associated with the use of varenicline, a widely used smoking cessation drug. We conducted a systematic review and meta-analysis of randomized controlled trials to ascertain the serious adverse cardiovascular effects of varenicline compared with placebo among tobacco users.

Methods:

We searched MEDLINE, EMBASE, the Cochrane Database of Systematic Reviews, websites of regulatory authorities and registries of clinical trials, with no date or language restrictions, through September 2010 (updated March 2011) for published and unpublished studies. We selected double-blind randomized controlled trials of at least one week’s duration that involved smokers or people who used smokeless tobacco and that reported on cardiovascular events (ischemia, arrhythmia, congestive heart failure, sudden death or cardiovascular-related death) as serious adverse events associated with the use of varenicline.

Results:

We analyzed data from 14 double-blind randomized controlled trials involving 8216 participants. The trials ranged in duration from 7 to 52 weeks. Varenicline was associated with a significantly increased risk of serious adverse cardiovascular events compared with placebo (1.06% [52/4908] in varenicline group v. 0.82% [27/3308] in placebo group; Peto odds ratio [OR] 1.72, 95% confidence interval [CI] 1.09–2.71; I2 = 0%). The results of various sensitivity analyses were consistent with those of the main analysis, and a funnel plot showed no publication bias. There were too few deaths to allow meaningful comparisons of mortality.
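The Peto odds ratio used above is a one-step estimator built from observed-minus-expected event counts. A sketch for a single 2 × 2 table; in a meta-analysis the (O − E) and variance terms are summed across trials before exponentiating, so applying the formula to the pooled counts from the abstract is only illustrative and does not reproduce the stratified estimate of 1.72:

```python
import math

def peto_or(a, n_t, c, n_c):
    """Peto one-step odds ratio for a single 2x2 table:
    a events among n_t treated, c events among n_c controls."""
    n = n_t + n_c
    m1 = a + c                                   # total events
    m2 = n - m1                                  # total non-events
    expected = n_t * m1 / n                      # E[a] under the null
    variance = n_t * n_c * m1 * m2 / (n ** 2 * (n - 1))
    return math.exp((a - expected) / variance)

# Crude calculation on the pooled counts (illustrative only):
print(round(peto_or(52, 4908, 27, 3308), 2))  # ~1.29, smaller than the stratified 1.72
```

The gap between the crude pooled value and the published stratified value shows why meta-analyses combine per-trial statistics rather than simply adding up event counts.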

Interpretation:

Our meta-analysis raises safety concerns about the potential for an increased risk of serious adverse cardiovascular events associated with the use of varenicline among tobacco users.

Varenicline is one of the most widely used drugs for smoking cessation. It is a partial agonist at the α4–β2 nicotinic acetylcholine receptors and a full agonist at the α7 nicotinic acetylcholine receptor.1,2 The drug modulates parasympathetic output from the brainstem to the heart because of activities of the α7 receptor.3 Acute nicotine administration can induce thrombosis.4 Possible mechanisms by which varenicline may be associated with cardiovascular disease might include the action of varenicline at the α7 receptor in the brainstem or, similar to nicotine, a prothrombotic effect.2–4

At the time of its priority safety review of varenicline in 2006, the US Food and Drug Administration (FDA) noted that “[t]he serious adverse event data suggest that varenicline may possibly increase the risk of cardiac events, both ischemic and arrhythmic, particularly over longer treatment period.”5 Subsequently, the product label was updated: “Post marketing reports of myocardial infarction and cerebrovascular accidents including ischemic and hemorrhagic events have been reported in patients taking Chantix.”6 There are published reports of cardiac arrest associated with varenicline.7

Cardiovascular disease is an important cause of morbidity and mortality among tobacco users. The long-term cardiovascular benefits of smoking cessation are well established.8 Although one statistically underpowered trial reported a trend toward excess cardiovascular events associated with the use of varenicline,9 a systematic review of information on the cardiovascular effects of varenicline is unavailable to clinicians.

We conducted a systematic review and meta-analysis of randomized controlled trials (RCTs) to ascertain the serious adverse cardiovascular effects of varenicline compared with placebo among tobacco users.  
相似文献   

14.

Background:

Understanding the health care experience of people with dementia and their caregivers is becoming increasingly important given the growing number of affected individuals. We conducted a systematic review of qualitative studies that examined aspects of the health care experience of people with dementia and their caregivers to better understand ways to improve care for this population.

Methods:

We searched the electronic databases MEDLINE, Embase, PsycINFO and CINAHL to identify relevant articles. We extracted key study characteristics and methods from the included studies. We also extracted direct quotes from the primary studies, along with the interpretations provided by authors of the studies. We used meta-ethnography to synthesize the extracted information into an overall framework. We evaluated the quality of the primary studies using the Consolidated Criteria for Reporting Qualitative Research (COREQ) checklist.

Results:

In total, 46 studies met our inclusion criteria; these involved 1866 people with dementia and their caregivers. We identified 5 major themes: seeking a diagnosis; accessing supports and services; addressing information needs; disease management; and communication and attitudes of health care providers. We conceptualized the health care experience as progressing through phases of seeking understanding and information, identifying the problem, role transitions following diagnosis and living with change.

Interpretation:

The health care experience of people with dementia and their caregivers is a complex and dynamic process, which could be improved for many people. Understanding these experiences provides insight into potential gaps in existing health services. Modifying existing services or implementing new models of care to address these gaps may lead to improved outcomes for people with dementia and their caregivers.

The global prevalence of Alzheimer disease and related dementias is estimated to be 36 million people and is expected to double in the next 20 years.1 Several recent strategies for providing care to patients with dementia have highlighted the importance of coordinated health care services for this growing population.2–5 Gaps in the quality of care for people with dementia have been identified,6–8 and improving their quality of care and health care experience has been identified as a priority area.2–5

Incorporating the health care experience of patients and caregivers in health service planning is important to ensure that their needs are met and that person-centred care is provided.9 The health care experience of people with dementia and their caregivers provides valuable information about preferences for services and service delivery.10 Matching available services to patient treatment preferences leads to improved patient outcomes11,12 and satisfaction without increasing costs.13 Qualitative research is ideally suited to exploring the experiences and perspectives of patients and caregivers and has been used to examine these experiences for other conditions.14 We performed a systematic review and meta-ethnographic synthesis of qualitative studies exploring the health care experience of people with dementia and their caregivers in primary care settings, and we propose a conceptual framework for understanding and improving these health care experiences.  相似文献   

15.
Gronich N, Lavi I, Rennert G. CMAJ. 2011;183(18):E1319–E1325.

Background:

Combined oral contraceptives are a common method of contraception, but they carry a risk of venous and arterial thrombosis. We assessed whether use of drospirenone was associated with an increase in thrombotic risk relative to third-generation combined oral contraceptives.

Methods:

Using computerized records of the largest health care provider in Israel, we identified all women aged 12 to 50 years for whom combined oral contraceptives had been dispensed between Jan. 1, 2002, and Dec. 31, 2008. We followed the cohort until 2009. We used Poisson regression models to estimate the crude and adjusted rate ratios for risk factors for venous thrombotic events (specifically deep vein thrombosis and pulmonary embolism) and arterial thrombotic events (specifically transient ischemic attack and cerebrovascular accident). We performed multivariable analyses to compare types of contraceptives, with adjustment for the various risk factors.

Results:

We identified a total of 1017 (0.24%) venous and arterial thrombotic events among 431 223 use episodes during 819 749 woman-years of follow-up (6.33 venous events and 6.10 arterial events per 10 000 woman-years). In a multivariable model, use of drospirenone carried an increased risk of venous thrombotic events, relative to both third-generation combined oral contraceptives (rate ratio [RR] 1.43, 95% confidence interval [CI] 1.15–1.78) and second-generation combined oral contraceptives (RR 1.65, 95% CI 1.02–2.65). There was no increase in the risk of arterial thrombosis with drospirenone.

Interpretation:

Use of drospirenone-containing oral contraceptives was associated with an increased risk of deep vein thrombosis and pulmonary embolism, but not of transient ischemic attack or cerebrovascular accident, relative to second- and third-generation combined oral contraceptives.

Oral hormonal therapy is the preferred method of contraception, especially among young women. In the United States in 2002, 12 million women were using “the pill.”1 In a survey of households in Great Britain conducted in 2005 and 2006, one-quarter of women aged 16 to 49 years were using this form of contraception.2 A large variety of combined oral contraceptive preparations are available, differing in terms of estrogen dose and in terms of the dose and type of the progestin component. Among preparations currently in use, the estrogen dose ranges from 15 to 35 μg, and the progestins are second-generation, third-generation or newer. The second-generation progestins (levonorgestrel and norgestrel), which are derivatives of testosterone, have differing degrees of androgenic and estrogenic activities. The structure of these agents was modified to reduce the androgenic activity, thus producing the third-generation progestins (desogestrel, gestodene and norgestimate). Newer progestins are chlormadinone acetate, a derivative of progesterone, and drospirenone, an analogue of the aldosterone antagonist spironolactone with antimineralocorticoid and antiandrogenic activities. Drospirenone is promoted as causing less weight gain and edema than other forms of oral contraceptives, but few well-designed studies have compared the minor adverse effects of these drugs.3

The use of oral contraceptives has been reported to confer an increased risk of venous and arterial thrombotic events,4–7 specifically an absolute risk of venous thrombosis of 6.29 per 10 000 woman-years, compared with 3.01 per 10 000 woman-years among nonusers.8 It has long been accepted that there is a dose–response relationship between estrogen and the risk of venous thrombotic events. Reducing the estrogen dose from 50 μg to 20–30 μg has reduced the risk.9 Studies published since the mid-1990s have suggested a greater risk of venous thrombotic events with third-generation oral contraceptives than with second-generation formulations,10–13 indicating that the risk is also progestin-dependent. The pathophysiological mechanism of the risk with different progestins is unknown. A twofold increase in the risk of arterial events (specifically ischemic stroke6,14 and myocardial infarction7) has been observed in case–control studies for users of second-generation pills and possibly also third-generation preparations.7,14

Conflicting information is available regarding the risk of venous and arterial thrombotic events associated with drospirenone. An increased risk of venous thromboembolism, relative to second-generation pills, has been reported recently,8,15,16 whereas two manufacturer-sponsored studies claimed no increase in risk.17,18 In the study reported here, we investigated the risk of venous and arterial thrombotic events among users of various oral contraceptives in a large population-based cohort.  相似文献   

16.

Background:

The importance of chronic inflammation as a determinant of aging phenotypes may have been underestimated in previous studies that used a single measurement of inflammatory markers. We assessed inflammatory markers twice over a 5-year exposure period to examine the association between chronic inflammation and future aging phenotypes in a large population of men and women.

Methods:

We obtained data for 3044 middle-aged adults (28.2% women) who were participating in the Whitehall II study and had no history of stroke, myocardial infarction or cancer at our study’s baseline (1997–1999). Interleukin-6 was measured at baseline and 5 years earlier. Cause-specific mortality, chronic disease and functioning were ascertained from hospital data, register linkage and clinical examinations. We used these data to create 4 aging phenotypes at the 10-year follow-up (2007–2009): successful aging (free of major chronic disease and with optimal physical, mental and cognitive functioning), incident fatal or nonfatal cardiovascular disease, death from noncardiovascular causes and normal aging (all other participants).

Results:

Of the 3044 participants, 721 (23.7%) met the criteria for successful aging at the 10-year follow-up, 321 (10.6%) had cardiovascular disease events, 147 (4.8%) died from noncardiovascular causes, and the remaining 1855 (60.9%) were included in the normal aging phenotype. After adjustment for potential confounders, having a high interleukin-6 level (> 2.0 ng/L) twice over the 5-year exposure period nearly halved the odds of successful aging at the 10-year follow-up (odds ratio [OR] 0.53, 95% confidence interval [CI] 0.38–0.74) and increased the risk of future cardiovascular events (OR 1.64, 95% CI 1.15–2.33) and noncardiovascular death (OR 2.43, 95% CI 1.58–3.80).

Interpretation:

Chronic inflammation, as ascertained by repeat measurements, was associated with a range of unhealthy aging phenotypes and a decreased likelihood of successful aging. Our results suggest that assessing long-term chronic inflammation by repeat measurement of interleukin-6 has the potential to guide clinical practice.

Chronic inflammation has been implicated in the pathogenesis of age-related conditions,1 such as type 2 diabetes,2,3 cardiovascular disease,4 cognitive impairment5 and brain atrophy.6 Chronic inflammation may result from or be a cause of age-related disease processes (illustrated in Appendix 1, available at www.cmaj.ca/lookup/suppl/doi:10.1503/cmaj.122072/-/DC1). For example, obesity increases inflammation, and chronic inflammation, in turn, contributes to the development of type 2 diabetes by inducing insulin resistance,7,8 and to coronary artery disease by promoting atherogenesis.9 Thus, raised levels of inflammation appear to be implicated in various pathological processes leading to diseases in older age. Of the various markers of systemic inflammation, interleukin-6 is particularly relevant to aging outcomes. There is increasing evidence that interleukin-6 is the pro-inflammatory cytokine that “drives” downstream inflammatory markers, such as C-reactive protein and fibrinogen.10,11 Interleukin-6, in contrast to C-reactive protein and fibrinogen, is also likely to play a causal role in aging owing to its direct effects on the brain and skeletal muscles.12,13 In addition, results of Mendelian randomization studies of interleukin-6 and studies of antagonists are consistent with a causal role for interleukin-6 in relation to coronary artery disease, again in contrast to C-reactive protein and fibrinogen.14 However, current understanding of the link between chronic inflammation and aging phenotypes is hampered by the methodologic limitations of many existing studies. Most studies reported an assessment of inflammation based on a single measurement, precluding a distinction between the short-term (acute) and longer-term (chronic) impact of the inflammatory process on disease outcomes.7

We conducted a study using 2 measurements of interleukin-6 obtained about 5 years apart to examine the association between chronic inflammation and aging phenotypes assessed 10 years later in a large population of men and women. Because inflammation characterizes a wide range of pathological processes, we considered several aging phenotypes, including cardiovascular disease (fatal and nonfatal), death from noncardiovascular causes and successful aging (optimal functioning across different physical, mental and cognitive domains).  相似文献   

17.

Background:

Anemia is an important public health and clinical problem. Observational studies have linked iron deficiency and anemia in children with many poor outcomes, including impaired cognitive development; however, iron supplementation, a widely used preventive and therapeutic strategy, is associated with adverse effects. Primary-school–aged children are at a critical stage in intellectual development, and optimization of their cognitive performance could have long-lasting individual and population benefits. In this study, we summarize the evidence for the benefits and safety of daily iron supplementation in primary-school–aged children.

Methods:

We searched electronic databases (including MEDLINE and Embase) and other sources (July 2013) for randomized and quasi-randomized controlled trials involving daily iron supplementation in children aged 5–12 years. We combined the data using random effects meta-analysis.
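The random-effects pooling mentioned above is commonly implemented with the DerSimonian–Laird estimator, which adds a between-study variance component to each study's weight. A sketch using hypothetical per-study standardized mean differences and variances (the review's actual data and software are described in the full paper):

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate (DerSimonian-Laird) from per-study
    effect sizes and their within-study variances; returns (pooled, 95% CI)."""
    w = [1 / v for v in variances]
    fixed = sum(wi * y for wi, y in zip(w, effects)) / sum(w)
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, effects))   # heterogeneity
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)                 # between-study variance
    w_star = [1 / (v + tau2) for v in variances]
    pooled = sum(wi * y for wi, y in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1 / sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical standardized mean differences from three trials:
pooled, ci = dersimonian_laird([0.3, 0.6, 0.8], [0.04, 0.09, 0.06])
print(round(pooled, 2), tuple(round(x, 2) for x in ci))
```

When between-study heterogeneity (tau²) is zero, the estimate reduces to the fixed-effect inverse-variance result.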

Results:

We identified 16 501 studies; of these, we evaluated 76 full-text papers and included 32 studies including 7089 children. Of the included studies, 31 were conducted in low- or middle-income settings. Iron supplementation improved global cognitive scores (standardized mean difference 0.50, 95% confidence interval [CI] 0.11 to 0.90, p = 0.01), intelligence quotient among anemic children (mean difference 4.55, 95% CI 0.16 to 8.94, p = 0.04) and measures of attention and concentration. Iron supplementation also improved age-adjusted height among all children and age-adjusted weight among anemic children. Iron supplementation reduced the risk of anemia by 50% and the risk of iron deficiency by 79%. Adherence in the trial settings was generally high. Safety data were limited.

Interpretation:

Our analysis suggests that iron supplementation safely improves hematologic and nonhematologic outcomes among primary-school–aged children in low- or middle-income settings and is well-tolerated.

An estimated 25% of school-aged children worldwide are anemic.1 Iron deficiency is thought to account for about half of the global cases of anemia2 and is associated with inadequate dietary iron and, in developing settings, hookworm and schistosomiasis.3 In developed settings, anemia is prevalent among disadvantaged populations, including newly arrived refugees, indigenous people4 and some ethnic groups (e.g., Hispanic people in the United States).5,6 About 3% of primary-school–aged children in Canada are anemic.7 Programs to address anemia are constrained by concerns that iron supplements cause adverse effects, including an increased risk of infections such as malaria in endemic areas.8

In observational studies, iron deficiency has been associated with impaired cognitive and physical development. It has been estimated that each 10 g/L decrement in hemoglobin reduces future intelligence quotient (IQ) by 1.73 points.9 However, observational data are susceptible to confounding,10 and a causal relation between iron deficiency and cognitive impairment has not been confirmed.11 Randomized controlled trials should overcome confounding, but results of trials examining this question have not agreed.

Optimizing cognitive and physical development in primary-school–aged children could have life-long benefits.12 However, anemia-control recommendations must balance safety and efficacy. We performed a systematic review of the effects of daily iron supplementation, a commonly used strategy to combat anemia,2 in primary-school–aged children. We examined cognitive, growth and hematologic outcomes and adverse effects across all settings.

18.

Background:

Hospital mortality has decreased over time for critically ill patients with various forms of brain injury. We hypothesized that the proportion of patients who progress to neurologic death may have also decreased.

Methods:

We performed a prospective cohort study involving consecutive adult patients with traumatic brain injury, subarachnoid hemorrhage, intracerebral hemorrhage or anoxic brain injury admitted to regional intensive care units in southern Alberta over a 10.5-year period. We used multivariable logistic regression to adjust for patient age and score on the Glasgow Coma Scale at admission, and to assess whether the proportion of patients who progress to neurologic death has changed over time.
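The "OR per yr" reported from a logistic regression like this is exp(β) for the year covariate, with a confidence interval computed by a normal approximation on the log-odds scale. A minimal sketch of that arithmetic, reconstructing the implied standard error from the interval quoted in the Results (0.92, 95% CI 0.87–0.98):

```python
import math

def or_ci(odds_ratio, se_log_or, z=1.96):
    """95% CI for an odds ratio via a normal approximation on the log scale."""
    beta = math.log(odds_ratio)  # regression coefficient implied by the OR
    return math.exp(beta - z * se_log_or), math.exp(beta + z * se_log_or)

# Recover the implied standard error of the log-OR from the reported CI width
se = (math.log(0.98) - math.log(0.87)) / (2 * 1.96)
lo, hi = or_ci(0.92, se)  # should reproduce roughly 0.87-0.98
```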

Results:

The cohort consisted of 2788 patients. The proportion of patients who progressed to neurologic death was 8.1% at the start of the study period, and the adjusted odds of progressing to neurologic death decreased over the study period (odds ratio [OR] per yr 0.92, 95% confidence interval [CI] 0.87–0.98, p = 0.006). This change was most pronounced among patients with traumatic brain injury (OR per yr 0.87, 95% CI 0.78–0.96, p = 0.005); there was no change among patients with anoxic injury (OR per yr 0.96, 95% CI 0.85–1.09, p = 0.6). A review of the medical records suggests that missed cases of neurologic death were rare (≤ 0.5% of deaths).

Interpretation:

The proportion of patients with brain injury who progress to neurologic death has decreased over time, especially among those with head trauma. This finding may reflect positive developments in the prevention and care of brain injury. However, organ donation after neurologic death represents the major source of organs for transplantation. Thus, these findings may help explain the relatively stagnant rates of deceased organ donation in some regions of Canada, which in turn has important implications for the care of patients with end-stage organ failure.

Mortality has decreased among critically ill patients with various forms of brain injury in Canada and around the world.1–10 There have also been changes in the incidence of stroke and the rate of admission to hospital for traumatic brain injury, especially among younger people and those whose injuries are related to motor vehicle or bicycle crashes.5,6,10–13

Some countries have noted a possible decline in the total number of patients with neurologic death.14,15 Neurologic death (“brain death”) may occur when patients with brain injury experience progressive cerebral edema, complicated by transtentorial herniation. It is defined by the irreversible cessation of cerebral and brainstem functions, including respiration.16 Circulation and gas exchange persist only because of the use of mechanical ventilation. National guidelines exist for the diagnosis of neurologic death.17,18 We hypothesized that the proportion of patients with acute brain injury who progress to neurologic death may have decreased over time.

19.

Background:

The gut microbiota is essential to human health throughout life, yet the acquisition and development of this microbial community during infancy remains poorly understood. Meanwhile, there is increasing concern over rising rates of cesarean delivery and insufficient exclusive breastfeeding of infants in developed countries. In this article, we characterize the gut microbiota of healthy Canadian infants and describe the influence of cesarean delivery and formula feeding.

Methods:

We included a subset of 24 term infants from the Canadian Healthy Infant Longitudinal Development (CHILD) birth cohort. Mode of delivery was obtained from medical records, and mothers were asked to report on infant diet and medication use. Fecal samples were collected at 4 months of age, and we characterized the microbiota composition using high-throughput DNA sequencing.

Results:

We observed high variability in the profiles of fecal microbiota among the infants. The profiles were generally dominated by Actinobacteria (mainly the genus Bifidobacterium) and Firmicutes (with diverse representation from numerous genera). Compared with breastfed infants, formula-fed infants had increased richness of species, with overrepresentation of Clostridium difficile. Escherichia–Shigella and Bacteroides species were underrepresented in infants born by cesarean delivery. Infants born by elective cesarean delivery had particularly low bacterial richness and diversity.
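The richness and diversity compared across infants here are typically computed per sample from taxon counts: richness is the number of taxa observed, while the Shannon index also weights taxa by relative abundance. A minimal sketch using hypothetical genus-level read counts (not CHILD cohort data):

```python
import math

def richness_and_shannon(counts):
    """Species richness and Shannon diversity index for one sample's taxon counts."""
    nonzero = [c for c in counts if c > 0]
    total = sum(nonzero)
    props = [c / total for c in nonzero]
    shannon = -sum(p * math.log(p) for p in props)  # H = -sum(p * ln p)
    return len(nonzero), shannon

# Hypothetical read counts for two infants with equal richness
rich_a, div_a = richness_and_shannon([900, 50, 30, 20])    # dominated by one genus
rich_b, div_b = richness_and_shannon([250, 250, 250, 250])  # evenly distributed
```

Both samples contain four taxa, but the evenly distributed sample has the higher Shannon diversity, which is why richness and diversity can diverge as separate summaries of a community.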

Interpretation:

These findings advance our understanding of the gut microbiota in healthy infants. They also provide new evidence for the effects of delivery mode and infant diet as determinants of this essential microbial community in early life.

The human body harbours trillions of microbes, known collectively as the “human microbiome.” By far the highest density of commensal bacteria is found in the digestive tract, where resident microbes outnumber host cells by at least 10 to 1. Gut bacteria play a fundamental role in human health by promoting intestinal homeostasis, stimulating development of the immune system, providing protection against pathogens, and contributing to the processing of nutrients and harvesting of energy.1,2 The disruption of the gut microbiota has been linked to an increasing number of diseases, including inflammatory bowel disease, necrotizing enterocolitis, diabetes, obesity, cancer, allergies and asthma.1 Despite this evidence and a growing appreciation for the integral role of the gut microbiota in lifelong health, relatively little is known about the acquisition and development of this complex microbial community during infancy.3

Two of the best-studied determinants of the gut microbiota during infancy are mode of delivery and exposure to breast milk.4,5 Cesarean delivery perturbs normal colonization of the infant gut by preventing exposure to maternal microbes, whereas breastfeeding promotes a “healthy” gut microbiota by providing selective metabolic substrates for beneficial bacteria.3,5 Despite recommendations from the World Health Organization,6 the rate of cesarean delivery has continued to rise in developed countries and rates of breastfeeding decrease substantially within the first few months of life.7,8 In Canada, more than 1 in 4 newborns are born by cesarean delivery, and less than 15% of infants are exclusively breastfed for the recommended duration of 6 months.9,10 In some parts of the world, elective cesarean deliveries are performed by maternal request, often because of apprehension about pain during childbirth, and sometimes for patient–physician convenience.11

The potential long-term consequences of decisions regarding mode of delivery and infant diet are not to be underestimated. Infants born by cesarean delivery are at increased risk of asthma, obesity and type 1 diabetes,12 whereas breastfeeding is variably protective against these and other disorders.13 These long-term health consequences may be partially attributable to disruption of the gut microbiota.12,14

Historically, the gut microbiota has been studied with the use of culture-based methodologies to examine individual organisms. However, up to 80% of intestinal microbes cannot be grown in culture.3,15 New technology using culture-independent DNA sequencing enables comprehensive detection of intestinal microbes and permits simultaneous characterization of entire microbial communities. Multinational consortia have been established to characterize the “normal” adult microbiome using these exciting new methods;16 however, these methods have been underused in infant studies. Because early colonization may have long-lasting effects on health, infant studies are vital.3,4 Among the few studies of infant gut microbiota using DNA sequencing, most were conducted in restricted populations, such as infants delivered vaginally,17 infants born by cesarean delivery who were formula-fed18 or preterm infants with necrotizing enterocolitis.19

Thus, the gut microbiota is essential to human health, yet the acquisition and development of this microbial community during infancy remains poorly understood.3 In the current study, we address this gap in knowledge using new sequencing technology and detailed exposure assessments20 of healthy Canadian infants selected from a national birth cohort to provide representative, comprehensive profiles of gut microbiota according to mode of delivery and infant diet.

20.
Robin Skinner, Steven McFaull. CMAJ 2012;184(9):1029–1034

Background:

Suicide is the second leading cause of death for young Canadians (10–19 years of age) — a disturbing trend that has shown little improvement in recent years. Our objective was to examine suicide trends among Canadian children and adolescents.

Methods:

We conducted a retrospective analysis of standardized suicide rates using Statistics Canada mortality data for the period spanning from 1980 to 2008. We analyzed the data by sex and by suicide method over time for two age groups: 10–14 year olds (children) and 15–19 year olds (adolescents). We quantified annual trends by calculating the average annual percent change (AAPC).
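The average annual percent change (AAPC) quantified here is conventionally derived from a log-linear trend model, ln(rate) = a + β·year, giving AAPC = 100·(e^β − 1). A minimal sketch with a hypothetical rate series declining about 1% per year (not the Statistics Canada mortality data):

```python
import math

def aapc_from_slope(beta):
    """AAPC from the slope of a log-linear trend: 100 * (exp(beta) - 1)."""
    return 100.0 * (math.exp(beta) - 1.0)

def fit_log_linear(years, rates):
    """Ordinary least-squares slope of ln(rate) on calendar year."""
    ybar = sum(years) / len(years)
    logs = [math.log(r) for r in rates]
    lbar = sum(logs) / len(logs)
    num = sum((y - ybar) * (l - lbar) for y, l in zip(years, logs))
    den = sum((y - ybar) ** 2 for y in years)
    return num / den

# Hypothetical suicide rates per 100 000, declining ~1% per year
years = list(range(2000, 2010))
rates = [6.0 * 0.99 ** (y - 2000) for y in years]
aapc = aapc_from_slope(fit_log_linear(years, rates))  # about -1.0 (% per year)
```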

Results:

We found an average annual decrease of 1.0% (95% confidence interval [CI] −1.5 to −0.4) in the suicide rate for children and adolescents, but stratification by age and sex showed significant variation. We saw an increase in suicide by suffocation among female children (AAPC = 8.1%, 95% CI 6.0 to 10.4) and adolescents (AAPC = 8.0%, 95% CI 6.2 to 9.8). In addition, we noted a decrease in suicides involving poisoning and firearms during the study period.

Interpretation:

Our results show that suicide rates in Canada are increasing among female children and adolescents and decreasing among male children and adolescents. Limiting access to lethal means has some potential to mitigate risk. However, suffocation, which has become the predominant method for committing suicide for these age groups, is not amenable to this type of primary prevention.

Suicide was ranked as the second leading cause of death among Canadians aged 10–34 years in 2008.1 It is recognized that suicidal behaviour and ideation is an important public health issue among children and adolescents; disturbingly, suicide is a leading cause of Canadian childhood mortality (i.e., among youths aged 10–19 years).2,3

Between 1980 and 2008, there were substantial improvements in mortality attributable to unintentional injury among 10–19 year olds, with rates decreasing from 37.7 per 100 000 to 10.7 per 100 000; suicide rates, however, showed less improvement, with only a small reduction during the same period (from 6.2 per 100 000 in 1980 to 5.2 per 100 000 in 2008).1

Previous studies that looked at suicides among Canadian adolescents and young adults (i.e., people aged 15–25 years) have reported rates as being generally stable over time, but with a marked increase in suicides by suffocation and a decrease in those involving firearms.2 There is limited literature on self-inflicted injuries among children 10–14 years of age in Canada and the United States, but there appears to be a trend toward younger children starting to self-harm.3,4 Furthermore, the trend of suicide by suffocation moving to younger ages may be partly due to cases of the “choking game” (self-strangulation without intent to cause permanent harm) that have been misclassified as suicides.5–7

Risk factors for suicidal behaviour and ideation in young people include a psychiatric diagnosis (e.g., depression), substance abuse, past suicidal behaviour, family factors and other life stressors (e.g., relationships, bullying) that have complex interactions.8 A suicide attempt involves specific intent, plans and availability of lethal means, such as firearms,9 elevated structures10 or substances.11 The existence of “pro-suicide” sites on the Internet and in social media12 may further increase risk by providing details of various ways to commit suicide, as well as evaluations ranking these methods by effectiveness, amount of pain involved and length of time to produce death.13–15

Our primary objective was to present the patterns of suicide among children and adolescents (aged 10–19 years) in Canada.
