Similar articles

20 similar articles found.
1.

Background

Readmissions to hospital are increasingly being used as an indicator of quality of care. However, this approach is valid only when we know what proportion of readmissions are avoidable. We conducted a systematic review of studies that measured the proportion of readmissions deemed avoidable. We examined how such readmissions were measured and estimated their prevalence.

Methods

We searched the MEDLINE and EMBASE databases to identify all studies published from 1966 to July 2010 that reviewed hospital readmissions and that specified how many were classified as avoidable.

Results

Our search strategy identified 34 studies. Three of the studies used combinations of administrative diagnostic codes to determine whether readmissions were avoidable. Criteria used in the remaining studies were subjective. Most of the studies were conducted at single teaching hospitals, did not consider information from the community or treating physicians, and used only one reviewer to decide whether readmissions were avoidable. The median proportion of readmissions deemed avoidable was 27.1% but varied from 5% to 79%. Three study-level factors (teaching status of hospital, whether all diagnoses or only some were considered, and length of follow-up) were significantly associated with the proportion of admissions deemed to be avoidable and explained some, but not all, of the heterogeneity between the studies.

Interpretation

All but three of the studies used subjective criteria to determine whether readmissions were avoidable. Study methods had notable deficits and varied extensively, as did the proportion of readmissions deemed avoidable. The true proportion of hospital readmissions that are potentially avoidable remains unclear.

In most instances, unplanned readmissions to hospital indicate bad health outcomes for patients. Sometimes they are due to a medical error or the provision of suboptimal patient care. Other times, they are unavoidable because they are due to the development of new conditions or the deterioration of refractory, severe chronic conditions.

Hospital readmissions are frequently used to gauge patient care. Many organizations use them as a metric for institutional or regional quality of care.1 The widespread public reporting of hospital readmissions and their use in considerations for funding implicitly suggest a belief that readmissions indicate the quality of care provided by particular physicians and institutions.

The validity of hospital readmissions as an indicator of quality of care depends on the extent to which readmissions are avoidable. As the proportion of readmissions deemed to be avoidable decreases, the effort and expense required to avoid one readmission will increase. This decrease in avoidable admissions will also dilute the relation between the overall readmission rate and quality of care. Therefore, it is important to know the proportion of hospital readmissions that are avoidable.

We conducted a systematic review of studies that measured the proportion of readmissions that were avoidable. We examined how such readmissions were measured and estimated their prevalence.

2.

Background:

Hospital readmissions are important patient outcomes that can be accurately captured with routinely collected administrative data. Hospital-specific readmission rates have been reported as a quality-of-care indicator. However, the extent to which these measures vary with different calculation methods is uncertain.

Methods:

We identified all discharges from Ontario hospitals from 2005 to 2010 and determined whether patients died or were urgently readmitted within 30 days. For each hospital, we calculated 4 distinct observed-to-expected ratios, estimating the expected number of events using different adjustments for confounders (age and sex v. complete) and different units of analysis (all admissions v. single admission per patient).
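To make the observed-to-expected calculation concrete, here is a minimal sketch in Python of how hospital-level ratios can be computed under a given adjustment model. Column names (hospital, event, age, sex, comorbidity) are hypothetical; this is an illustration, not the authors' code.

```python
# Sketch: hospital-specific observed-to-expected (O/E) ratios for 30-day
# death or urgent readmission under a chosen adjustment model.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

def oe_ratios(df: pd.DataFrame, formula: str) -> pd.Series:
    """Fit a logistic model, then compare each hospital's observed events
    with the sum of its model-predicted probabilities."""
    model = smf.glm(formula, data=df, family=sm.families.Binomial()).fit()
    df = df.assign(expected=model.predict(df))
    grouped = df.groupby("hospital")
    return grouped["event"].sum() / grouped["expected"].sum()

# Minimal (age and sex) versus more complete adjustment, as contrasted above:
# oe_basic = oe_ratios(admissions, "event ~ age + sex")
# oe_full  = oe_ratios(admissions, "event ~ age + sex + comorbidity")
```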

Results:

We included 3 148 648 admissions to hospital for 1 802 704 patients in 162 hospitals. Ratios adjusted for age and sex alone had the greatest variation. Within hospitals, ranges of the 4 ratios averaged 31% of the overall estimate. Readmission ratios adjusted for age and sex showed the lowest correlation (Spearman correlation coefficient 0.48–0.68). Hospital rankings based on the different measures had an average range of 47.4 (standard deviation 32.2) out of 162.

Interpretation:

We found notable variation in rates of death or urgent readmission within 30 days based on the extent of adjustment for confounders and the unit of analysis. Slight changes in the methods used to calculate hospital-specific readmission rates influence their values and the consequent rankings of hospitals. Our results highlight the caution required when comparing hospital performance using rates of death or urgent readmission within 30 days.

Readmission rates are used to gauge and compare hospital performance and have been reported around the world.14 These rates create great public interest and concern regarding the local quality of health care. A recently created Canadian website reporting indicators including readmission rates crashed when it experienced 15 times more hits than expected.5,6 Policy-makers in some jurisdictions have implemented programs linking readmission rates to reimbursement.7

The influence of the statistical methods used to calculate readmission rates has not been extensively explored. Variation exists in the methods used to calculate readmission rates: in Australia, patient-level covariates are not adjusted for;8 in the United States, Medicare uses a hierarchical model to adjust for patient age, sex and comorbidity, in addition to clustering of patients within hospitals.9 Furthermore, the patient populations included when calculating readmission rates vary, from a limited group of diagnoses in the US4 to almost all admissions to hospital in Great Britain.10

Therefore, the methods used to determine readmission rates vary extensively with no apparent consensus on how these statistics should be calculated. We calculated adjusted hospital-specific rates of death or urgent readmission within 30 days and hospital rankings, varying 2 key factors relevant to generating these statistics: the completeness of confounder adjustment and the inclusion of all admissions to hospital versus a single admission per patient. Our goal was to determine the reliability of early death or urgent readmission rates as an indicator of hospital performance.

3.

Background:

Early physician follow-up after discharge is associated with lower rates of death and readmission among patients with heart failure. We explored whether physician continuity further influences outcomes after discharge.

Methods:

We used data from linked administrative databases for all adults aged 20 years or more in the province of Alberta who were discharged alive from hospital between January 1999 and June 2009 with a first-time diagnosis of heart failure. We used Cox proportional hazard models with time-dependent covariates to analyze the effect of follow-up with a familiar physician within the first month after discharge on the primary outcome of death or urgent all-cause readmission over 6 months. A familiar physician was defined as one who had seen the patient at least twice in the year before the index admission or once during the index admission.
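The "familiar physician" definition above is essentially a counting rule over visit records. A minimal sketch, assuming hypothetical (physician, date) visit tuples drawn from administrative data:

```python
# Sketch of the stated definition: a familiar physician saw the patient at
# least twice in the year before the index admission, or at least once
# during it. Data structures are hypothetical, not the authors' code.
from datetime import date, timedelta

def is_familiar(visits: list[tuple[str, date]], physician: str,
                admit: date, discharge: date) -> bool:
    """visits: (physician_id, visit_date) pairs for one patient."""
    year_before = sum(
        1 for p, d in visits
        if p == physician and admit - timedelta(days=365) <= d < admit)
    during_admission = sum(
        1 for p, d in visits
        if p == physician and admit <= d <= discharge)
    return year_before >= 2 or during_admission >= 1
```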

Results:

In the first month after discharge, 5336 (21.9%) of the 24 373 identified patients had no follow-up visits, 16 855 (69.2%) saw a familiar physician, and 2182 (9.0%) saw unfamiliar physician(s) exclusively. The risk of death or unplanned readmission during the 6-month observation period was lower among patients who saw a familiar physician (43.6%; adjusted hazard ratio [HR] 0.87, 95% confidence interval [CI] 0.83–0.91) or an unfamiliar physician (43.6%; adjusted HR 0.90, 95% CI 0.83–0.97) for early follow-up visits, as compared with patients who had no follow-up visits (62.9%). Taking into account all follow-up visits over the 6-month period, we found that the risk of death or urgent readmission was lower among patients who had all of their visits with a familiar physician than among those followed by unfamiliar physicians (adjusted HR 0.91, 95% CI 0.85–0.98).

Interpretation:

Early physician follow-up after discharge and physician continuity were both associated with better outcomes among patients with heart failure. Research is needed to explore whether physician continuity is important for other conditions and in settings other than recent hospital discharge.

Hospital care accounts for almost one-third of health care spending, and unplanned readmissions within 30 days after discharge cost more than $20 billion each year in the United States and Canada.1 Heart failure is one of the most common reasons for admission to hospital and is associated with a high risk of readmission.1 Although the prognosis for patients with heart failure has improved over the past decade, the risk of early death or readmission after discharge is still high and is increasing.2 Prompt follow-up of patients with heart failure has been associated with lower rates of death and readmission,3,4 and 30-day follow-up has been included as a quality-of-care indicator in Canada.5

It is unclear, however, whether the postdischarge visits should be with the physician who previously saw the patient or with any physician. Results of studies exploring the association between provider continuity and postdischarge outcomes have been inconclusive, and the studies have included few patients with heart failure.69 Intuitively, one might consider physician continuity important for patients with heart failure discharged from hospital, given their age, high comorbidity burdens and complex treatment regimens. However, a robust evidence base and multiple guidelines with consistent messaging on key management principles have made physician continuity potentially less important.

We designed this study to determine whether physician continuity influenced postdischarge outcomes among patients with heart failure beyond the influence of early physician follow-up.

4.

Background

The prevalence of frailty increases with age in older adults, but frailty is largely unreported for younger adults, where its associated risk is less clear. Furthermore, less is known about how frailty changes over time among younger adults. We estimated the prevalence and outcomes of frailty, in relation to accumulation of deficits, across the adult lifespan.

Methods

We analyzed data for community-dwelling respondents (age 15–102 years at baseline) to the longitudinal component of the National Population Health Survey, with seven two-year cycles, beginning 1994–1995. The outcomes were death, use of health services and change in health status, measured in terms of a Frailty Index constructed from 42 self-reported health variables.
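A Frailty Index of this deficit-accumulation type is conventionally the mean of the scored deficits, each coded between 0 and 1. The sketch below is illustrative only and does not reproduce the survey's 42 items or their codings.

```python
# Sketch of a deficit-accumulation Frailty Index: the mean of all
# non-missing deficit scores, each scored from 0 (absent) to 1 (present).
def frailty_index(deficits: list[float | None]) -> float:
    scored = [d for d in deficits if d is not None]
    if not scored:
        raise ValueError("no scored deficits")
    assert all(0.0 <= d <= 1.0 for d in scored), "each deficit is coded 0-1"
    return sum(scored) / len(scored)

# e.g., 9 of 42 deficits present gives an index of about 0.21, a range
# commonly treated as frail in this literature:
print(round(frailty_index([1.0] * 9 + [0.0] * 33), 3))  # 0.214
```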

Results

The sample consisted of 14 713 respondents (54.2% women). Vital status was known for more than 99% of the respondents. The prevalence of frailty increased with age, from 2.0% (95% confidence interval [CI] 1.7%–2.4%) among those younger than 30 years to 22.4% (95% CI 19.0%–25.8%) for those older than age 65, including 43.7% (95% CI 37.1%–50.8%) for those 85 and older. At all ages, the 160-month mortality rate was lower among relatively fit people than among those who were frail (e.g., 2% v. 16% at age 40; 42% v. 83% at age 75 or older). These relatively fit people tended to remain relatively fit over time. Relative to all other groups, a greater proportion of the most frail people used health services at baseline (28.3%, 95% CI 21.5%–35.5%) and at each follow-up cycle (26.7%, 95% CI 15.4%–28.0%).

Interpretation

Deficits accumulated with age across the adult spectrum. At all ages, a higher Frailty Index was associated with higher mortality and greater use of health care services. At younger ages, recovery to the relatively fittest state was common, but the chance of complete recovery declined with age.

On average, health declines with age. Even so, at any given age the health status across a group of people varies. Variability in health status and in the risk for adverse outcomes for people of the same age is referred to as “frailty,” which typically has been studied among older adults.1,2 Although frailty can be operationalized in different ways, in general, people who report having no health problems are more likely to be fit than people who report having many problems. Unsurprisingly, the chance of adverse outcomes — death, admission to a long-term care institution or to hospital, or worsening of health status — increases with the number of problems that the individual has.3,4

The antecedents of frailty appear to arise some time before old age,59 although how frailty emerges as people age, whether it carries the same risk at all ages and the extent to which it fluctuates are less clear.9,10 In the study reported here, we evaluated changes in relative fitness and frailty across the adult lifespan. Our objectives were to investigate the effect of age on the prevalence of relative fitness and frailty, the characteristics of people who were relatively fit in comparison with those who were frail across the adult lifespan, the effects of fitness and frailty on mortality in relation to age and sex, and the characteristics of people who maintained the highest levels of fitness across a decade relative to those who at any point reported any decline.

5.
6.

Background:

Whereas most studies have focused on euthanasia and physician-assisted suicide, few have dealt comprehensively with other critical interventions administered at the end of life. We surveyed cancer patients, family caregivers, oncologists and members of the general public to determine their attitudes toward such interventions.

Methods:

We administered a questionnaire to four groups about their attitudes toward five end-of-life interventions — withdrawal of futile life-sustaining treatment, active pain control, withholding of life-sustaining measures, active euthanasia and physician-assisted suicide. We performed multivariable analyses to compare attitudes and to identify sociodemographic characteristics associated with the attitudes.

Results:

A total of 3840 individuals — 1242 cancer patients, 1289 family caregivers and 303 oncologists from 17 hospitals, as well as 1006 members of the general Korean population — participated in the survey. A large majority in each of the groups supported withdrawal of futile life-sustaining treatment (87.1%–94.0%) and use of active pain control (89.0%–98.4%). A smaller majority (60.8%–76.0%) supported withholding of life-sustaining treatment. About 50% of those in the patient and general population groups supported active euthanasia or physician-assisted suicide, as compared with less than 40% of the family caregivers and less than 10% of the oncologists. Higher income was significantly associated with approval of the withdrawal of futile life-sustaining treatment and the practice of active pain control. Older age, male sex and having no religion were significantly associated with approval of withholding of life-sustaining measures. Older age, male sex, having no religion and lower education level were significantly associated with approval of active euthanasia and physician-assisted suicide.

Interpretation:

Although the various participant groups shared the same attitude toward futile and ameliorative end-of-life care (the withdrawal of futile life-sustaining treatment and the use of active pain control), oncologists had a more negative attitude than those in the other groups toward the active ending of life (euthanasia and physician-assisted suicide).

As more attention turns to when and how the lives of terminally ill patients end in the clinical setting, debate about the issues of euthanasia and physician-assisted suicide grows.15 Euthanasia has been discussed in Europe and the United States for more than a century, and the public has become more accepting of it.410 Announcing its first-ever ruling in favour of an unconscious patient’s right to die with dignity, the Korean Supreme Court recently ruled that the doctors of an elderly woman in a persistent vegetative state must remove her artificial respirator on the basis of her presumed wishes.11 A public debate aimed at legalizing withdrawal of futile life-sustaining treatment, exposure to stories of dying patients in the mass media, and the court’s decision may have led to a greater awareness of, and sensibility toward, the rights of terminally ill patients. In 2000, only 16.5% of 535 Korean oncologists surveyed said that they would prescribe morphine for severe cancer pain, and more than half of 655 patients who had pain said they had inadequate pain management.12

Although much has been written about attitudes toward how the general public would choose to die in the clinical setting,4,13 most studies have focused on only euthanasia and physician-assisted suicide.1418 We conducted a large survey to examine attitudes among cancer patients, family caregivers, oncologists and members of the general public toward critical interventions at the end of life of terminally ill patients.

7.

Background:

The San Francisco Syncope Rule has been proposed as a clinical decision rule for risk stratification of patients presenting to the emergency department with syncope. It has been validated across various populations and settings. We undertook a systematic review of its accuracy in predicting short-term serious outcomes.

Methods:

We identified studies by means of systematic searches in seven electronic databases from inception to January 2011. We extracted study data in duplicate and used a bivariate random-effects model to assess the predictive accuracy and test characteristics.

Results:

We included 12 studies with a total of 5316 patients, of whom 596 (11%) experienced a serious outcome. The prevalence of serious outcomes across the studies varied between 5% and 26%. The pooled estimate of sensitivity of the San Francisco Syncope Rule was 0.87 (95% confidence interval [CI] 0.79–0.93), and the pooled estimate of specificity was 0.52 (95% CI 0.43–0.62). There was substantial between-study heterogeneity (resulting in a 95% prediction interval for sensitivity of 0.55–0.98). The probability of a serious outcome given a negative score with the San Francisco Syncope Rule was 5% or lower, and the probability was 2% or lower when the rule was applied only to patients for whom no cause of syncope was identified after initial evaluation in the emergency department. The most common cause of false-negative classification for a serious outcome was cardiac arrhythmia.
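To see how the pooled estimates translate into the reported post-test probabilities, here is a worked likelihood-ratio calculation. The arithmetic is illustrative only, not the review's code.

```python
# Worked example: probability of a serious outcome after a negative
# San Francisco Syncope Rule score, via the negative likelihood ratio.
def posttest_prob_negative(sens: float, spec: float, prevalence: float) -> float:
    lr_neg = (1 - sens) / spec              # negative likelihood ratio
    pre_odds = prevalence / (1 - prevalence)
    post_odds = pre_odds * lr_neg
    return post_odds / (1 + post_odds)

# Pooled estimates: sensitivity 0.87, specificity 0.52; prevalence 11%.
# LR- = (1 - 0.87) / 0.52 = 0.25, giving roughly a 3% post-test probability,
# consistent with the "5% or lower" figure reported above.
print(round(posttest_prob_negative(0.87, 0.52, 0.11), 3))  # ~0.03
```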

Interpretation:

The San Francisco Syncope Rule should be applied only for patients in whom no cause of syncope is evident after initial evaluation in the emergency department. Consideration of all available electrocardiograms, as well as arrhythmia monitoring, should be included in application of the San Francisco Syncope Rule. Between-study heterogeneity was likely due to inconsistent classification of arrhythmia.

Syncope is defined as sudden, transient loss of consciousness with the inability to maintain postural tone, followed by spontaneous recovery and return to pre-existing neurologic function.15 It represents a common clinical problem, accounting for 1%–3% of visits to the emergency department and up to 6% of admissions to acute care hospitals.6,7

Assessment of syncope in patients presenting to the emergency department is challenging because of the heterogeneity of underlying pathophysiologic processes and diseases. Although many underlying causes of syncope are benign, others are associated with substantial morbidity or mortality, including cardiac arrhythmia, myocardial infarction, pulmonary embolism and occult hemorrhage.4,810 Consequently, a considerable proportion of patients with benign causes of syncope are admitted for inpatient evaluation.11,12 Therefore, risk stratification that allows for the safe discharge of patients at low risk of a serious outcome is important for efficient management of patients in emergency departments and for reduction of costs associated with unnecessary diagnostic workup.12,13

In recent years, various prediction rules based on the probability of an adverse outcome after an episode of syncope have been proposed.3,1416 However, the San Francisco Syncope Rule, derived by Quinn and colleagues in 2004,3 is the only prediction rule for serious outcomes that has been validated in a variety of populations and settings. This simple, five-step clinical decision rule is intended to identify patients at low risk of short-term serious outcomes3,17 (Box 1).

Box 1:

San Francisco Syncope Rule3

Aim: Prediction of short-term (within 30 days) serious outcomes in patients presenting to the emergency department with syncope.

Definitions

Syncope: Transient loss of consciousness with return to baseline neurologic function. Trauma-associated and alcohol- or drug-related loss of consciousness excluded, as is definite seizure or altered mental status.

Serious outcome: Death, myocardial infarction, arrhythmia, pulmonary embolism, stroke, subarachnoid hemorrhage, significant hemorrhage or any condition causing or likely to cause a return visit to the emergency department and admission to hospital for a related event.

Selection of predictors in multivariable analysis: Fifty predictor variables were evaluated for significant associations with a serious outcome and combined to create a minimal set of predictors that are highly sensitive and specific for prediction of a serious outcome.

Clinical decision rule: Five risk factors, indicated by the mnemonic “CHESS,” were identified to predict patients at high risk of a serious outcome:
  • C – History of congestive heart failure
  • H – Hematocrit < 30%
  • E – Abnormal findings on 12-lead ECG or cardiac monitoring17 (new changes or nonsinus rhythm)
  • S – History of shortness of breath
  • S – Systolic blood pressure < 90 mm Hg at triage
Note: ECG = electrocardiogram.

The aim of this study was to conduct a systematic review and meta-analysis of the accuracy of the San Francisco Syncope Rule in predicting short-term serious outcomes for patients presenting to the emergency department with syncope.
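The CHESS criteria in Box 1 amount to an any-factor-positive rule. A minimal sketch, with hypothetical field names:

```python
# Sketch of the CHESS rule from Box 1: a patient is classified high risk if
# any one of the five factors is present. Not the authors' implementation.
from dataclasses import dataclass

@dataclass
class SyncopePatient:
    history_of_chf: bool        # C - congestive heart failure
    hematocrit_pct: float       # H - hematocrit < 30%
    abnormal_ecg: bool          # E - new changes or nonsinus rhythm
    shortness_of_breath: bool   # S - history of shortness of breath
    systolic_bp_triage: float   # S - systolic BP < 90 mm Hg at triage

def high_risk(p: SyncopePatient) -> bool:
    return (p.history_of_chf
            or p.hematocrit_pct < 30
            or p.abnormal_ecg
            or p.shortness_of_breath
            or p.systolic_bp_triage < 90)
```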

8.
Background:

Disability-related considerations have largely been absent from the COVID-19 response, despite evidence that people with disabilities are at elevated risk for acquiring COVID-19. We evaluated clinical outcomes in patients who were admitted to hospital with COVID-19 with a disability compared with patients without a disability.

Methods:

We conducted a retrospective cohort study that included adults with COVID-19 who were admitted to hospital and discharged between Jan. 1, 2020, and Nov. 30, 2020, at 7 hospitals in Ontario, Canada. We compared in-hospital death, admission to the intensive care unit (ICU), hospital length of stay and unplanned 30-day readmission among patients with and without a physical disability, hearing or vision impairment, traumatic brain injury, or intellectual or developmental disability, overall and stratified by age (≤ 64 and ≥ 65 yr) using multivariable regression, controlling for sex, residence in a long-term care facility and comorbidity.

Results:

Among 1279 admissions to hospital for COVID-19, 22.3% had a disability. We found that patients with a disability were more likely to die than those without a disability (28.1% v. 17.6%), had longer hospital stays (median 13.9 v. 7.8 d) and more readmissions (17.6% v. 7.9%), but had lower ICU admission rates (22.5% v. 28.3%). After adjustment, there were no statistically significant differences between those with and without disabilities for in-hospital death or admission to ICU. After adjustment, patients with a disability had longer hospital stays (rate ratio 1.36, 95% confidence interval [CI] 1.19–1.56) and greater risk of readmission (relative risk 1.77, 95% CI 1.14–2.75). In age-stratified analyses, we observed longer hospital stays among patients with a disability than in those without, in both younger and older subgroups; readmission risk was driven by younger patients with a disability.

Interpretation:

Patients with a disability who were admitted to hospital with COVID-19 had longer stays and elevated readmission risk than those without disabilities. Disability-related needs should be addressed to support these patients in hospital and after discharge.

A successful public health response to the COVID-19 pandemic requires accurate and timely identification of, and support for, high-risk groups. There is increasing recognition that marginalized groups, including congregate care residents, racial and ethnic minorities, and people experiencing poverty, have elevated incidence of COVID-19.1,2 Older age and comorbidities such as diabetes are also risk factors for severe COVID-19 outcomes.3,4 One potential high-risk group that has received relatively little attention is people with disabilities.

The World Health Organization estimates there are 1 billion people with disabilities globally.5 In North America, the prevalence of disability is 20%, with one-third of people older than 65 years having a disability.6 Disabilities include physical disabilities, hearing and vision impairments, traumatic brain injury and intellectual or developmental disabilities.5,6 Although activity limitations experienced by people with disabilities are heterogeneous,5,6 people with disabilities share high rates of risk factors for acquiring COVID-19, including poverty, residence in congregate care and being members of racialized communities.79 People with disabilities may be more reliant on close contact with others to meet their daily needs, and some people with disabilities, especially those with intellectual or developmental disabilities, may have difficulty following public health rules. Once they acquire SARS-CoV-2 infection, people with disabilities may be at risk for severe outcomes because they have elevated rates of comorbidities.10 Some disabilities (e.g., spinal cord injuries and neurologic disabilities) result in physiologic changes that increase vulnerability to respiratory diseases and may mask symptoms of acute respiratory disease, which may delay diagnosis.1113 There have also been reports of barriers to high-quality hospital care for patients with disabilities who have COVID-19, including communication issues caused by the use of masks and restricted access to support persons.1417

Some studies have suggested that patients with disabilities and COVID-19 are at elevated risk for severe disease and death, with most evaluating intellectual or developmental disability.13,1826 Yet, consideration of disability-related needs has largely been absent from the COVID-19 response, with vaccine eligibility driven primarily by age and medical comorbidity, limited accommodations made for patients with disabilities who are in hospital, and disability data often not being captured in surveillance programs.1417 To inform equitable pandemic supports, there is a need for data on patients with a broad range of disabilities who have COVID-19. We sought to evaluate standard clinical outcomes in patients admitted to hospital with COVID-19 (i.e., in-hospital death, intensive care unit [ICU] admission, hospital length of stay and unplanned 30-day readmission)27 for patients with and without a disability, overall and stratified by age. We hypothesized that patients with a disability would have worse outcomes because of a greater prevalence of comorbidities,10 physiologic characteristics that increase morbidity risk1113 and barriers to high-quality hospital care.1417

9.

Background:

Although warfarin has been extensively studied in clinical trials, little is known about rates of hemorrhage attributable to its use in routine clinical practice. Our objective was to examine incident hemorrhagic events in a large population-based cohort of patients with atrial fibrillation who were starting treatment with warfarin.

Methods:

We conducted a population-based cohort study involving residents of Ontario (age ≥ 66 yr) with atrial fibrillation who started taking warfarin between Apr. 1, 1997, and Mar. 31, 2008. We defined a major hemorrhage as any visit to hospital for hemorrhage. We determined crude rates of hemorrhage during warfarin treatment, overall and stratified by CHADS2 score (congestive heart failure, hypertension, age ≥ 75 yr, diabetes mellitus and prior stroke, transient ischemic attack or thromboembolism).
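For reference, the CHADS2 score used for stratification is a simple additive rule: 1 point each for congestive heart failure, hypertension, age 75 or older and diabetes, and 2 points for prior stroke, transient ischemic attack or thromboembolism. A minimal sketch (field names are illustrative):

```python
# Sketch of the CHADS2 score described above; not the authors' code.
def chads2(chf: bool, hypertension: bool, age: int, diabetes: bool,
           prior_stroke_tia_te: bool) -> int:
    return (int(chf) + int(hypertension) + int(age >= 75) + int(diabetes)
            + 2 * int(prior_stroke_tia_te))

# e.g., a 78-year-old with hypertension and a prior TIA scores
# 1 (hypertension) + 1 (age >= 75) + 2 (prior TIA) = 4, placing them in the
# highest-risk stratum reported below.
print(chads2(False, True, 78, False, True))  # 4
```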

Results:

We included 125 195 patients with atrial fibrillation who started treatment with warfarin during the study period. Overall, the rate of hemorrhage was 3.8% (95% confidence interval [CI] 3.8%–3.9%) per person-year. The risk of major hemorrhage was highest during the first 30 days of treatment. During this period, rates of hemorrhage were 11.8% (95% CI 11.1%–12.5%) per person-year in all patients and 16.7% (95% CI 14.3%–19.4%) per person-year among patients with a CHADS2 score of 4 or greater. Over the 5-year follow-up, 10 840 patients (8.7%) visited the hospital for hemorrhage; of these patients, 1963 (18.1%) died in hospital or within 7 days of being discharged.

Interpretation:

In this large cohort of older patients with atrial fibrillation, we found that rates of hemorrhage are highest within the first 30 days of warfarin therapy. These rates are considerably higher than the rates of 1%–3% reported in randomized controlled trials of warfarin therapy. Our study provides timely estimates of warfarin-related adverse events that may be useful to clinicians, patients and policy-makers as new options for treatment become available.

Atrial fibrillation is a major risk factor for stroke and systemic embolism, and strong evidence supports the use of the anticoagulant warfarin to reduce this risk.13 However, warfarin has a narrow therapeutic range and requires regular monitoring of the international normalized ratio to optimize its effectiveness and minimize the risk of hemorrhage.4,5 Although rates of major hemorrhage reported in trials of warfarin therapy typically range between 1% and 3% per person-year,611 observational studies suggest that rates may be considerably higher when warfarin is prescribed outside of a clinical trial setting,1215 approaching 7% per person-year in some studies.1315 The different safety profiles derived from clinical trials and observational data may reflect the careful selection of patients, precise definitions of bleeding and close monitoring in the trial setting. Furthermore, although a few observational studies suggest that hemorrhage rates are higher than generally appreciated, these studies involve small numbers of patients who received care in specialized settings.1416 Consequently, the generalizability of their results to general practice may be limited.

More information regarding hemorrhage rates during warfarin therapy is particularly important in light of the recent introduction of new oral anticoagulant agents such as dabigatran, rivaroxaban and apixaban, which may be associated with different outcome profiles.1719 There are currently no large studies offering real-world, population-based estimates of hemorrhage rates among patients taking warfarin, which are needed for future comparisons with new anticoagulant agents once they are widely used in routine clinical practice.20

We sought to describe the risk of incident hemorrhage in a large population-based cohort of patients with atrial fibrillation who had recently started warfarin therapy.

10.
11.
Schultz AS, Finegan B, Nykiforuk CI, Kvern MA. CMAJ. 2011;183(18):E1334–E1344.

Background:

Many hospitals have adopted smoke-free policies on their property. We examined the consequences of such polices at two Canadian tertiary acute-care hospitals.

Methods:

We conducted a qualitative study using ethnographic techniques over a six-month period. Participants (n = 186) shared their perspectives on and experiences with tobacco dependence and managing the use of tobacco, as well as their impressions of the smoke-free policy. We interviewed inpatients individually from eight wards (n = 82), key policy-makers (n = 9) and support staff (n = 14) and held 16 focus groups with health care providers and ward staff (n = 81). We also reviewed ward documents relating to tobacco dependence and looked at smoking-related activities on hospital property.

Results:

Noncompliance with the policy and exposure to secondhand smoke were ongoing concerns. People’s impressions of the use of tobacco varied, including divergent opinions as to whether such use was a bad habit or an addiction. Treatment for tobacco dependence and the management of symptoms of withdrawal were offered inconsistently. Participants voiced concerns over patient safety and leaving the ward to smoke.

Interpretation:

Policies mandating smoke-free hospital property have important consequences beyond noncompliance, including concerns over patient safety and disruptions to care. Without adequately available and accessible support for withdrawal from tobacco, patients will continue to face personal risk when they leave hospital property to smoke.

Canadian cities and provinces have passed smoking bans with the goal of reducing people’s exposure to secondhand smoke in workplaces, public spaces and on the property adjacent to public buildings.1,2 In response, Canadian health authorities and hospitals began implementing policies mandating smoke-free hospital property, with the goals of reducing the exposure of workers, patients and visitors to tobacco smoke while delivering a public health message about the dangers of smoking.25 An additional anticipated outcome was the reduced use of tobacco among patients and staff. The impetuses for adopting smoke-free policies include public support for such legislation and the potential for litigation for exposure to second-hand smoke.2,4

Tobacco use is a modifiable risk factor associated with a variety of cancers, cardiovascular diseases and respiratory conditions.611 Patients in hospital who use tobacco tend to have more surgical complications and exacerbations of acute and chronic health conditions than patients who do not use tobacco.611 Any policy aimed at reducing exposure to tobacco in hospitals is well supported by evidence, as is the integration of interventions targeting tobacco dependence.12 Unfortunately, most of the nearly five million Canadians who smoke will receive suboptimal treatment,13 as the routine provision of interventions for tobacco dependence in hospital settings is not a practice norm.1416 In smoke-free hospitals, two studies suggest minimal support is offered for withdrawal,17,18 and one reports an increased use of nicotine-replacement therapy after the implementation of the smoke-free policy.19

Assessments of the effectiveness of smoke-free policies for hospital property tend to focus on noncompliance and related issues of enforcement.17,20,21 Although evidence of noncompliance and litter on hospital property2,17,20 implies ongoing exposure to tobacco smoke, half of the participating hospital sites in one study reported less exposure to tobacco smoke within hospital buildings and on the property.18 In addition, there is evidence to suggest some decline in smoking among staff.18,19,21,22

We sought to determine the consequences of policies mandating smoke-free hospital property in two Canadian acute-care hospitals by eliciting lived experiences of the people faced with enacting the policies: patients and health care providers. In addition, we elicited stories from hospital support staff and administrators regarding the policies.

12.

Background:

The frequency of polypectomy is an important indicator of quality assurance for population-based colorectal cancer screening programs. Although administrative databases of physician claims provide population-level data on the performance of polypectomy, the accuracy of the procedure codes has not been examined. We determined the level of agreement between physician claims for polypectomy and documentation of the procedure in endoscopy reports.

Methods:

We conducted a retrospective cohort study involving patients aged 50–80 years who underwent colonoscopy at seven study sites in Montréal, Que., between January and March 2007. We obtained data on physician claims for polypectomy from the Régie de l’Assurance Maladie du Québec (RAMQ) database. We evaluated the accuracy of the RAMQ data against information in the endoscopy reports.

Results:

We collected data on 689 patients who underwent colonoscopy during the study period. The sensitivity of physician claims for polypectomy in the administrative database was 84.7% (95% confidence interval [CI] 78.6%–89.4%), the specificity was 99.0% (95% CI 97.5%–99.6%), concordance was 95.1% (95% CI 93.1%–96.5%), and the kappa value was 0.87 (95% CI 0.83–0.91).
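The agreement statistics reported above all derive from a single 2×2 table of claims versus endoscopy reports. The sketch below shows the calculation; the cell counts are illustrative values chosen to be consistent with the reported estimates, not the study's actual data.

```python
# Sketch: agreement statistics from a 2x2 table of physician claims
# (rows) versus endoscopy-report documentation (columns) of polypectomy.
def agreement(tp: int, fp: int, fn: int, tn: int) -> dict[str, float]:
    n = tp + fp + fn + tn
    po = (tp + tn) / n                      # observed concordance
    # expected agreement by chance, from the marginal totals:
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "concordance": po,
        "kappa": (po - pe) / (1 - pe),      # chance-corrected agreement
    }

# Illustrative counts (total 689) reproducing the reported estimates:
print(agreement(tp=159, fp=5, fn=29, tn=496))
# -> sensitivity ~0.85, specificity ~0.99, concordance ~0.95, kappa ~0.87
```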

Interpretation:

Despite providing a reasonably accurate estimate of the frequency of polypectomy, physician claims underestimated the number of procedures performed by more than 15%. Such differences could affect conclusions regarding quality assurance if used to evaluate population-based screening programs for colorectal cancer. Even when a high level of accuracy is anticipated, validating physician claims data from administrative databases is recommended.

Population-based screening programs for colorectal cancer rely heavily on the performance of colonoscopy as either the initial examination or as the follow-up to a positive screening by virtual colonography, double-contrast barium enema or fecal occult blood testing. Colonoscopy is the only screening examination accepted at 10-year intervals among people at average risk without significant polyps found. It allows direct visualization of the entire colon and rectum and permits removal of adenomatous polyps, precursors of colorectal cancer. The frequency of polypectomy is an important indicator of quality assurance for colorectal cancer screening programs.

In the province of Quebec, physicians are reimbursed for medical services by the Régie de l’Assurance Maladie du Québec (RAMQ), the government agency responsible for administering the provincial health insurance plan. Physicians receive additional remuneration for performing a polypectomy if they include the procedure code in their claim.

Data from physician claims databases are commonly used in health services research,17 even though the data are collected for administrative purposes and physician reimbursement. Procedure codes in physician claims databases are presumed to have a very high level of agreement with data in medical charts.8 A physician making a claim will need to submit the diagnostic code and, when applicable, the procedure code. Studies that rely on physician claims databases can be divided into those that examine the diagnostic codes entered and those that examine the procedure codes entered. Few studies have attempted to validate procedure codes, and often not as the primary study objective.914

We conducted a study to determine the level of agreement between physician claims for polypectomy and documentation of the procedure in endoscopy reports.

13.

Background

Inuit have not experienced an epidemic in type 2 diabetes mellitus, and it has been speculated that they may be protected from obesity’s metabolic consequences. We conducted a population-based screening for diabetes among Inuit in the Canadian Arctic and evaluated the association of visceral adiposity with diabetes.

Methods

A total of 36 communities participated in the International Polar Year Inuit Health Survey. Of the 2796 Inuit households approached, 1901 (68%) participated, with 2595 participants. Households were randomly selected, and adult residents were invited to participate. Assessments included anthropometry and fasting plasma lipids and glucose, and, because of survey logistics, only 32% of participants underwent a 75 g oral glucose tolerance test. We calculated weighted prevalence estimates of metabolic risk factors for all participants.

Results

Participants’ mean age was 43.3 years; 35% were obese, 43.8% had an at-risk waist, and 25% had an elevated triglyceride level. Diabetes was identified in 12.2% of participants aged 50 years and older and in 1.9% of those younger than 50 years. A hypertriglyceridemic-waist phenotype was a strong predictor of diabetes (odds ratio [OR] 8.6, 95% confidence interval [CI] 2.1–34.6) in analyses adjusted for age, sex, region, family history of diabetes, education and use of lipid-lowering medications.
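The hypertriglyceridemic-waist phenotype itself is a simple conjunction of two cutoffs. A minimal sketch; the thresholds shown are commonly published values used here purely as assumptions, not necessarily those applied in this survey:

```python
# Sketch of a hypertriglyceridemic-waist screen: the phenotype is the
# co-occurrence of an at-risk waist circumference and an elevated fasting
# triglyceride level. Cutoffs below are illustrative assumptions; the
# survey's own sex-specific thresholds should be substituted.
def hypertriglyceridemic_waist(waist_cm: float,
                               triglycerides_mmol_l: float,
                               waist_cutoff: float = 90.0,
                               tg_cutoff: float = 2.0) -> bool:
    return waist_cm >= waist_cutoff and triglycerides_mmol_l >= tg_cutoff
```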

Interpretation

Metabolic risk factors were prevalent among Inuit. Our results suggest that Inuit are not protected from the metabolic consequences of obesity, and that the prevalence of diabetes among Inuit is now comparable to that observed in the general Canadian population. Assessment of waist circumference and fasting triglyceride levels could represent an efficient means for identifying Inuit at high risk for diabetes.

Indigenous people across the Arctic continue to undergo cultural transitions that affect all dimensions of life, with implications for emerging obesity and changes in patterns of disease burden.13 A high prevalence of obesity among Canadian Inuit has been noted,3,4 and yet studies have suggested that the metabolic consequences of obesity may not be as severe among Inuit as they are in predominantly Caucasian or First Nations populations.46 Conversely, the prevalence of type 2 diabetes mellitus, which was noted to be rare among Inuit in early studies,7,8 now matches or exceeds that of predominately Caucasian comparison populations in Alaska and Greenland.911 However, in Canada, available reports suggest that diabetes prevalence among Inuit remains below that of the general Canadian population.3,12

Given the rapid changes in the Arctic and a lack of comprehensive and uniform screening assessments, we used the International Polar Year Inuit Health Survey for Adults 2007–2008 to assess the current prevalence of glycemia and the toll of age and adiposity on glycemia in this population. However, adiposity is heterogeneous, and simple measures of body mass index (BMI) in kg/m2 and waist circumference do not measure visceral adiposity (or intra-abdominal adipose tissue), which is considered more deleterious than subcutaneous fat.13 Therefore, we evaluated the “hypertriglyceridemic-waist” phenotype (i.e., the presence of both an at-risk waist circumference and an elevated triglyceride level) as a proxy indicator of visceral fat.1315

14.

Background:

Telehealthcare has the potential to provide care for long-term conditions that are increasingly prevalent, such as asthma. We conducted a systematic review of studies of telehealthcare interventions used for the treatment of asthma to determine whether such approaches to care are effective.

Methods:

We searched the Cochrane Airways Group Specialised Register of Trials, which is derived from systematic searches of bibliographic databases including CENTRAL (the Cochrane Central Register of Controlled Trials), MEDLINE, Embase, CINAHL (Cumulative Index to Nursing and Allied Health Literature) and PsycINFO, as well as other electronic resources. We also searched registers of ongoing and unpublished trials. We were interested in studies that measured the following outcomes: quality of life, number of visits to the emergency department and number of admissions to hospital. Two reviewers identified studies for inclusion in our meta-analysis. We extracted data and used fixed-effect modelling for the meta-analyses.
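For readers unfamiliar with fixed-effect meta-analysis, the sketch below shows the standard inverse-variance pooling of log risk ratios that this approach implies. Inputs are hypothetical; this is an illustration of the general method, not the review's code.

```python
# Sketch: fixed-effect (inverse-variance) pooling of trial risk ratios,
# done on the log scale so that effects combine additively.
import math

def pooled_risk_ratio(rrs: list[float],
                      ses: list[float]) -> tuple[float, tuple[float, float]]:
    """rrs: per-trial risk ratios; ses: standard errors of log(RR)."""
    weights = [1 / se ** 2 for se in ses]
    log_pooled = (sum(w * math.log(rr) for w, rr in zip(weights, rrs))
                  / sum(weights))
    se_pooled = math.sqrt(1 / sum(weights))
    ci = (math.exp(log_pooled - 1.96 * se_pooled),
          math.exp(log_pooled + 1.96 * se_pooled))
    return math.exp(log_pooled), ci
```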

Results:

We identified 21 randomized controlled trials for inclusion in our analysis. The telehealthcare interventions investigated in these studies were telephone-based, video-based and Internet-based models of care. Meta-analysis did not show a clinically important improvement in patients’ quality of life, and there was no significant change in the number of visits to the emergency department over 12 months. There was a significant reduction in the number of patients admitted to hospital once or more over 12 months (risk ratio 0.25 [95% confidence interval 0.09 to 0.66]).

Interpretation:

We found no evidence of a clinically important impact on patients’ quality of life, but telehealthcare interventions do appear to have the potential to reduce the risk of admission to hospital, particularly for patients with severe asthma. Further research is required to clarify the cost-effectiveness of models of care based on telehealthcare.

There has been an increase in the prevalence of asthma in recent decades,13 and the Global Initiative for Asthma estimates that 300 million people worldwide now have the disease.4 The highest prevalence rates (30%) are seen in economically developed countries.58 There has also been an increase in the prevalence of asthma affecting both children and adults in many economically developing and transition countries.911

Asthma’s high burden of disease requires improvements in access to treatments.7,12,13 Patterns of help-seeking behaviour are also relevant: delayed reporting is associated with morbidity and the need for emergency care.

It is widely believed that telehealthcare interventions may help address some of the challenges posed by asthma by enabling remote delivery of care, facilitating timely access to health advice, supporting self-monitoring and medication concordance, and educating patients on avoiding triggers.1416 The precise role of these technologies in the management of care for people with long-term respiratory conditions needs to be established.17

The objective of this study was to systematically review the effectiveness of telehealthcare interventions among people with asthma in terms of quality of life, number of visits to the emergency department and admissions to hospital for exacerbations of asthma.

15.
Rachel Mann, Joy Adamson, Simon M. Gilbody. CMAJ. 2012;184(8):E424–E430.

Background:

Guidelines for perinatal mental health care recommend the use of two case-finding questions about depressed feelings and loss of interest in activities, despite the absence of validation studies in this context. We examined the diagnostic accuracy of these questions and of a third question about the need for help asked of women receiving perinatal care.

Methods:

We evaluated self-reported responses to two case-finding questions against an interviewer-assessed diagnostic standard (DSM-IV criteria for major depressive disorder) among 152 women receiving antenatal care at 26–28 weeks’ gestation and postnatal care at 5–13 weeks after delivery. Among women who answered “yes” to either question, we assessed the usefulness of asking a third question about the need for help. We calculated sensitivity, specificity and likelihood ratios for the two case-finding questions and for the added question about the need for help.

Results:

Antenatally, the two case-finding questions had a sensitivity of 100% (95% confidence interval [CI] 77%–100%), a specificity of 68% (95% CI 58%–76%), a positive likelihood ratio of 3.03 (95% CI 2.28–4.02) and a negative likelihood ratio of 0.041 (95% CI 0.003–0.63) in identifying perinatal depression. Postnatal results were similar. Among the women who screened positive antenatally, the additional question about the need for help had a sensitivity of 58% (95% CI 38%–76%), a specificity of 91% (95% CI 78%–97%), a positive likelihood ratio of 6.86 (95% CI 2.16–21.7) and a negative likelihood ratio of 0.45 (95% CI 0.25–0.80), with lower sensitivity and higher specificity postnatally.

Interpretation:

Negative responses to both of the case-finding questions showed acceptable accuracy for ruling out perinatal depression. For positive responses, the use of a third question about the need for help improved specificity and the ability to rule in depression.

The occurrence of depressive symptoms during the perinatal period is well-recognized. The estimated prevalence is 7.4%–20% antenatally1,2 and up to 19.2% in the first three postnatal months.3 Antenatal depression is associated with malnutrition, substance and alcohol abuse, poor self-reported health, poor use of antenatal care services and adverse neonatal outcomes.4 Postnatal depression has a substantial impact on the mother and her partner, the family, mother–baby interaction and on the longer-term emotional and cognitive development of the baby.5

Screening strategies to identify perinatal depression have been advocated, and specific questionnaires for use in the perinatal period, such as the Edinburgh Postnatal Depression Scale,6 were developed. However, in their current recommendations, the UK National Screening Committee7 and the US Committee on Obstetric Practice8 state that there is insufficient evidence to support the implementation of universal perinatal screening programs. The initial decision in 2001 by the National Screening Committee to not support universal perinatal screening9 attracted particular controversy in the United Kingdom; some service providers subsequently withdrew resources for treatment of postnatal depression, and subsequent pressure by perinatal community practitioners led to modification of the screening guidance in order to clarify the role of screening questionnaires in the assessment of perinatal depression.10

In 2007, the National Institute for Health and Clinical Excellence issued clinical guidelines for perinatal mental health care in the UK, which included guidance on the use of questionnaires to identify antenatal and postnatal depression.11 In this guidance, a case-finding approach to identify perinatal depression was strongly recommended; it involved the use of two case-finding questions (sometimes referred to as the Whooley questions), and an additional question about the need for help asked of women who answered “yes” to either of the initial questions (Box 1).

Box 1:

Case-finding questions recommended for the identification of perinatal depression10

  • “During the past month, have you often been bothered by feeling down, depressed or hopeless?”
  • “During the past month, have you often been bothered by having little interest or pleasure in doing things?”
  • A third question should be considered if the woman answers “yes” to either of the initial screening questions: “Is this something you feel you need or want help with?”
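The case-finding flow in Box 1 can be expressed as a short triage function. A minimal sketch; the return labels are illustrative, not clinical guidance:

```python
# Sketch of the Box 1 flow: two Whooley questions, then the help question
# asked only of women answering "yes" to either initial question.
def case_finding(feeling_down: bool, little_interest: bool,
                 wants_help: bool | None = None) -> str:
    if not (feeling_down or little_interest):
        return "negative - depression unlikely"   # rule-out arm (high sensitivity)
    if wants_help is None:
        return "positive - ask the help question"
    return ("positive - help wanted" if wants_help
            else "positive - no help wanted")
```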
Useful case-finding questions should be both sensitive and specific so they accurately identify those with and without the condition. The two case-finding questions have been validated in primary care samples12,13 and examined in other clinical populations1416 and are endorsed in recommendations by US and Canadian bodies for screening depression in adults.17,18 However, at the time the guidance from the National Institute for Health and Clinical Excellence was issued, there were no validation studies conducted in perinatal populations. A recent systematic review19 identified one study conducted in the United States that validated the two questions against established diagnostic criteria in 506 women attending well-child visits postnatally;20 sensitivity and specificity of the questions were 100% and 44% respectively at four weeks. The review failed to identify studies that validated the two questions and the additional question about the need for help against a gold-standard measure.

We conducted a validation study to assess the diagnostic accuracy of this brief case-finding approach against gold-standard psychiatric diagnostic criteria for depression in a population of women receiving perinatal care.

16.
Gronich N, Lavi I, Rennert G. CMAJ. 2011;183(18):E1319–E1325.

Background:

Combined oral contraceptives are a common method of contraception, but they carry a risk of venous and arterial thrombosis. We assessed whether use of drospirenone was associated with an increase in thrombotic risk relative to third-generation combined oral contraceptives.

Methods:

Using computerized records of the largest health care provider in Israel, we identified all women aged 12 to 50 years for whom combined oral contraceptives had been dispensed between Jan. 1, 2002, and Dec. 31, 2008. We followed the cohort until 2009. We used Poisson regression models to estimate the crude and adjusted rate ratios for risk factors for venous thrombotic events (specifically deep vein thrombosis and pulmonary embolism) and arterial thrombotic events (specifically transient ischemic attack and cerebrovascular accident). We performed multivariable analyses to compare types of contraceptives, with adjustment for the various risk factors.
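A Poisson rate model of this kind is typically fit with person-time as an offset so that exponentiated coefficients are rate ratios. A minimal sketch with hypothetical column names; this is the general form of the analysis, not the authors' code.

```python
# Sketch: Poisson regression with a log(person-time) offset, so that
# exp(coefficient) is a rate ratio per covariate level.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

def rate_ratios(episodes: pd.DataFrame) -> pd.Series:
    fit = smf.glm("events ~ C(pill_type) + C(age_group)",
                  data=episodes,
                  family=sm.families.Poisson(),
                  offset=np.log(episodes["woman_years"])).fit()
    return np.exp(fit.params)  # rate ratios relative to reference levels
```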

Results:

We identified a total of 1017 (0.24%) venous and arterial thrombotic events among 431 223 use episodes during 819 749 woman-years of follow-up (6.33 venous events and 6.10 arterial events per 10 000 woman-years). In a multivariable model, use of drospirenone carried an increased risk of venous thrombotic events, relative to both third-generation combined oral contraceptives (rate ratio [RR] 1.43, 95% confidence interval [CI] 1.15–1.78) and second-generation combined oral contraceptives (RR 1.65, 95% CI 1.02–2.65). There was no increase in the risk of arterial thrombosis with drospirenone.

Interpretation:

Use of drospirenone-containing oral contraceptives was associated with an increased risk of deep vein thrombosis and pulmonary embolism, but not transient ischemic attack or cerebrovascular accident, relative to second- and third-generation combined oral contraceptives.

Oral hormonal therapy is the preferred method of contraception, especially among young women. In the United States in 2002, 12 million women were using “the pill.”1 In a survey of households in Great Britain conducted in 2005 and 2006, one-quarter of women aged 16 to 49 years were using this form of contraception.2 A large variety of combined oral contraceptive preparations are available, differing in terms of estrogen dose and in terms of the dose and type of the progestin component. Among preparations currently in use, the estrogen dose ranges from 15 to 35 μg, and the progestins are second-generation, third-generation or newer. The second-generation progestins (levonorgestrel and norgestrel), which are derivatives of testosterone, have differing degrees of androgenic and estrogenic activities. The structure of these agents was modified to reduce the androgenic activity, thus producing the third-generation progestins (desogestrel, gestodene and norgestimate). Newer progestins are chlormadinone acetate, a derivative of progesterone, and drospirenone, an analogue of the aldosterone antagonist spironolactone having antimineralocorticoid and antiandrogenic activities. Drospirenone is promoted as causing less weight gain and edema than other forms of oral contraceptives, but few well-designed studies have compared the minor adverse effects of these drugs.3

The use of oral contraceptives has been reported to confer an increased risk of venous and arterial thrombotic events,47 specifically an absolute risk of venous thrombosis of 6.29 per 10 000 woman-years, compared with 3.01 per 10 000 woman-years among nonusers.8 It has long been accepted that there is a dose–response relationship between estrogen and the risk of venous thrombotic events. Reducing the estrogen dose from 50 μg to 20–30 μg has reduced the risk.9 Studies published since the mid-1990s have suggested a greater risk of venous thrombotic events with third-generation oral contraceptives than with second-generation formulations,1013 indicating that the risk is also progestin-dependent. The pathophysiological mechanism of the risk with different progestins is unknown. A twofold increase in the risk of arterial events (specifically ischemic stroke6,14 and myocardial infarction7) has been observed in case–control studies for users of second-generation pills and possibly also third-generation preparations.7,14

Conflicting information is available regarding the risk of venous and arterial thrombotic events associated with drospirenone. An increased risk of venous thromboembolism, relative to second-generation pills, has been reported recently,8,15,16 whereas two manufacturer-sponsored studies claimed no increase in risk.17,18 In the study reported here, we investigated the risk of venous and arterial thrombotic events among users of various oral contraceptives in a large population-based cohort.

17.

Background

Prioritizing patients using empirically derived access targets can help to ensure high-quality care. Adolescent scoliosis can worsen while patients wait for treatment, increasing the risk of adverse events. Our objective was to determine an empirically derived access target for scoliosis surgery and to compare this with consensus-based targets.

Methods

Two hundred sixteen consecutive patients receiving surgery for adolescent idiopathic scoliosis were included in the study. The main outcome was the need for additional surgery. Logistic regression modeling was used to evaluate the relation between surgical wait times and adverse events, and χ2 analysis was used as the primary analysis for the main outcome.
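To make the analytic approach concrete, here is a minimal Python sketch of the two primary analyses just described — a χ2 test at the six-month mark and logistic regression on wait time — run on synthetic data. The variable names (wait_days, additional_surgery) and the simulated effect size are illustrative assumptions, not the study's data.

```python
# Sketch of the primary analyses on synthetic data: chi-squared test at the
# six-month mark and logistic regression of the outcome on wait time.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import chi2_contingency

rng = np.random.default_rng(0)
n = 216                                            # cohort size from the abstract
wait_days = rng.integers(30, 540, n)               # hypothetical waits, consent to surgery
p_event = 1 / (1 + np.exp(-(-4.0 + 0.006 * wait_days)))  # assumed risk rising with wait
additional_surgery = rng.binomial(1, p_event)

# Chi-squared test: event rates above vs. below six months (~182 days)
table = pd.crosstab(wait_days > 182, additional_surgery)
chi2, pval, dof, _ = chi2_contingency(table)

# Logistic regression: odds of an adverse event per additional 90 days of waiting
X = sm.add_constant(wait_days / 90.0)
fit = sm.Logit(additional_surgery, X).fit(disp=0)
print(f"chi2 p = {pval:.4f}, OR per 90 days = {np.exp(fit.params[1]):.2f}")
```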

Results

Of the 88 patients who waited longer than six months for surgery, 13 (14.8%) needed additional surgery because of progression of curvature, versus 2 (1.6%) of the 128 patients who waited less than six months (χ2 analysis, p = 0.0001). Patients who waited longer than six months for surgery had greater progression of curvature, longer surgeries and longer stays in hospital. They also had less surgical correction than patients who waited less than six months (Wilcoxon–Mann–Whitney test, p = 0.011). All patients requiring additional surgery had waited longer than three months for their initial surgery. A receiver operating characteristic curve also suggested a three-month wait as an access target. The adjusted odds ratio for an adverse event was 1.81 (95% confidence interval 1.34–2.44) for each additional 90 days of waiting from the time of consent. The adjusted odds ratio increased with skeletal immaturity and with the size of the spinal curvature at the time of consent.
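Two points in these results reward a closer look. First, a per-90-day odds ratio compounds multiplicatively, so an adjusted OR of 1.81 per 90 days corresponds to roughly 1.81² ≈ 3.3 over 180 days. Second, the ROC-based choice of a three-month target can be illustrated by treating wait time as a score for predicting the need for additional surgery and picking the cutoff that maximizes Youden's J; the abstract does not state which cutoff criterion was actually used, so the sketch below (on synthetic data, with illustrative names) shows only one common approach.

```python
# Sketch of deriving an access target from an ROC curve on synthetic data:
# wait time is scored against the need for additional surgery, and the cutoff
# maximizing Youden's J (one common criterion) is reported in months.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(1)
wait_days = rng.integers(30, 540, 216)
p_event = 1 / (1 + np.exp(-(-4.0 + 0.006 * wait_days)))
event = rng.binomial(1, p_event)

fpr, tpr, thresholds = roc_curve(event, wait_days)  # wait time as the "score"
best = np.argmax(tpr - fpr)                         # Youden's J = sens + spec - 1
print(f"suggested access target ≈ {thresholds[best] / 30.4:.1f} months")
```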

Interpretation

A prolonged wait for surgery increased the risk of additional surgical procedures and other adverse events. An empirically derived access target of three months for surgery to treat adolescent idiopathic scoliosis could potentially eliminate the need for additional surgery by reducing progression of curvature. This is a shorter access target than the six months determined by expert consensus.

Adolescent idiopathic scoliosis affects just over 2% of females aged 12–14 years.1–3 Although only 10% of patients require surgery, spinal instrumentation and fusion for adolescent idiopathic scoliosis is the most common procedure in pediatric orthopaedics.4 Patients who wait too long for scoliosis surgery may require additional surgery, such as anterior release, to achieve satisfactory correction of the spinal curvature. These patients may also need longer surgeries and may be at increased risk of complications such as increased blood loss, neurologic deficits or inadequate correction of the curvature.5–14 Furthermore, as seen in other studies of wait times, patients and families can experience anxiety and prolonged suffering while waiting for treatment, which can negatively affect the quality of care.15–19 Programs such as the Canadian Pediatric Surgical Wait Times Project have determined a maximal acceptable wait time for adolescent scoliosis through expert consensus, much as other surgical wait-time targets have been set.20 Surprisingly, little or no attention has been given to developing evidence-based access targets or maximal acceptable wait times for most treatments.21 The purpose of this study was to determine the maximal acceptable wait time for surgical correction of adolescent idiopathic scoliosis using an empirically based approach, so as to minimize the possibility of adverse events related to progression of curvature.

18.
19.

Background

Systemic inflammation and dysregulated immune function in chronic obstructive pulmonary disease (COPD) are hypothesized to predispose patients to the development of herpes zoster. However, the risk of herpes zoster among patients with COPD is undocumented. We therefore aimed to investigate this risk.

Methods

We conducted a cohort study using data from the Taiwan Longitudinal Health Insurance Database. We performed Cox regression to estimate the hazard ratio (HR) of herpes zoster in the COPD cohort relative to an age- and sex-matched comparison cohort. In a further analysis, we divided the patients with COPD into three groups according to their use of steroid medications and examined the risk of herpes zoster in each group.
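As a rough illustration of this kind of cohort comparison, here is a minimal Python sketch using the lifelines package on simulated data. The column names (copd, zoster, time) and the simulated hazards are illustrative assumptions, not fields of the Taiwan Longitudinal Health Insurance Database.

```python
# Sketch of a Cox model comparing an exposed cohort (COPD) with a matched
# comparison cohort, on simulated data, with age and sex as covariates.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(2)
n = 5000
copd = rng.binomial(1, 0.2, n)
age = rng.integers(40, 90, n)
sex = rng.binomial(1, 0.5, n)

# Simulated zoster-free time, with a higher hazard among COPD patients
time = rng.exponential(1 / (0.01 * np.exp(0.5 * copd)))
observed = rng.binomial(1, 0.3, n)          # 1 = herpes zoster observed, 0 = censored

df = pd.DataFrame({"copd": copd, "age": age, "sex": sex,
                   "time": time, "zoster": observed})
cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="zoster")
print(cph.summary.loc["copd", ["exp(coef)",
                               "exp(coef) lower 95%",
                               "exp(coef) upper 95%"]])   # HR with 95% CI
```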

Results

The study included 8486 patients with COPD and 33 944 matched control patients. After adjustment for potential confounders, patients with COPD were more likely than controls to develop herpes zoster (adjusted HR 1.68, 95% confidence interval [CI] 1.45–1.95). Relative to the comparison cohort, the adjusted HR of herpes zoster was 1.67 (95% CI 1.43–1.96) for patients with COPD not taking steroid medications, higher for those using inhaled corticosteroids only (adjusted HR 2.09, 95% CI 1.38–3.16), and highest for those using oral steroids (adjusted HR 3.00, 95% CI 2.40–3.75).

Interpretation

Patients with COPD were at increased risk of herpes zoster relative to the general population. The relative risk was greatest for patients with COPD using oral steroids.

Herpes zoster is caused by reactivation of latent varicella-zoster virus residing in sensory ganglia after an earlier episode of varicella.1 It is characterized by a painful vesicular dermatomal rash and is commonly complicated by chronic pain (postherpetic neuralgia), resulting in reduced quality of life and functional disability of a degree comparable to that experienced by patients with congestive heart failure, diabetes mellitus or major depression.1,2 Patients with herpes zoster experience more substantial role limitations from emotional and physical problems than do patients with congestive heart failure or diabetes.3 Pain scores for postherpetic neuralgia have been shown to be as high as those for chronic pain from osteoarthritis and rheumatoid arthritis.3 Although aging is the best-known risk factor for herpes zoster, people with diseases associated with impaired immunity, such as malignancy, HIV infection, diabetes and rheumatic diseases, are also at higher risk.4,5

Chronic obstructive pulmonary disease (COPD) is characterized by progressive airflow limitation associated with an abnormal inflammatory response of the small airways and alveoli to inhaled particles and pollutants.6 Disruption of local defence systems (e.g., damage to the innate immune system, impaired mucociliary clearance) predisposes patients with COPD to respiratory tract infections. Each infection can cause exacerbation of COPD and further deterioration of lung function, which in turn increase the predisposition to infection.7,8

There is increasing evidence that COPD is an autoimmune disease, with chronic systemic inflammation involving more than just the airways and lungs.6 Given that various immune-mediated diseases (e.g., rheumatoid arthritis, inflammatory bowel disease) have been reported to be associated with an increased risk of herpes zoster,4,9,10 it is reasonable to hypothesize that the immune dysregulation found in COPD may put patients at higher risk of developing herpes zoster. In addition, the inhaled or systemic corticosteroids used to manage COPD can increase susceptibility to herpes zoster by suppressing normal immune function.11 However, data are limited regarding the risk of herpes zoster among patients with COPD.

The goal of our study was to investigate whether patients with COPD have a higher incidence of herpes zoster than the general population. In addition, we aimed to examine the risk of herpes zoster with and without steroid therapy among patients with COPD, relative to the general population.

20.

Background

Despite safety-related concerns, psychotropic medications are frequently prescribed to manage behavioural symptoms in older adults, particularly those with dementia. We assessed the comparative safety of different classes of psychotropic medications used in nursing home residents.

Methods

We identified a cohort of patients who were aged 65 years or older and had initiated treatment with psychotropics after admission to a nursing home in British Columbia between 1996 and 2006. We used proportional hazards models to compare rates of death and rates of hospital admissions for medical events within 180 days after treatment initiation. We used propensity-score adjustments to control for confounders.
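One common way to combine these two steps is to fit a treatment-assignment model and then include the estimated propensity score as a covariate in the proportional hazards model. The abstract does not specify the study's exact implementation, so the Python sketch below shows only that variant, on simulated data; every variable name and simulated relationship is an illustrative assumption.

```python
# Sketch of propensity-score adjustment followed by a proportional hazards
# model: treatment assignment depends on observed confounders, and the
# estimated propensity score enters the Cox model as a covariate.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from lifelines import CoxPHFitter

rng = np.random.default_rng(3)
n = 4000
age = rng.integers(65, 100, n)
frailty = rng.normal(0, 1, n)                   # stand-in for baseline confounders
p_treat = 1 / (1 + np.exp(-(0.05 * (age - 80) + 0.5 * frailty)))
conventional = rng.binomial(1, p_treat)         # 1 = conventional, 0 = atypical

# Propensity score: modelled probability of receiving the conventional agent
confounders = np.column_stack([age, frailty])
ps = LogisticRegression().fit(confounders, conventional).predict_proba(confounders)[:, 1]

# Simulated survival over 180 days; frailty and treatment raise the hazard
scale = 400 / np.exp(0.3 * conventional + 0.4 * frailty)
time = np.minimum(rng.exponential(scale), 180.0)
death = (time < 180.0).astype(int)              # 0 = censored at 180 days

df = pd.DataFrame({"conventional": conventional, "ps": ps,
                   "time": time, "death": death})
cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="death")
print(cph.summary.loc["conventional", "exp(coef)"])   # PS-adjusted rate ratio
```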

Results

Of the 10 900 patients admitted to nursing homes, 1942 initiated treatment with atypical antipsychotics, 1902 with conventional antipsychotics, 2169 with antidepressants and 4887 with benzodiazepines. Compared with users of atypical antipsychotics, users of conventional antipsychotics and users of antidepressants had increased risks of death (rate ratio [RR] 1.47, 95% confidence interval [CI] 1.14–1.91, and RR 1.20, 95% CI 0.96–1.50, respectively) and of femur fracture (RR 1.61, 95% CI 1.03–2.51, and RR 1.29, 95% CI 0.86–1.94, respectively). Users of benzodiazepines also had a higher risk of death than users of atypical antipsychotics (RR 1.28, 95% CI 1.04–1.58); the RR for heart failure was 1.54 (95% CI 0.89–2.67), and for pneumonia it was 0.85 (95% CI 0.56–1.31).

Interpretation

Among older patients admitted to nursing homes, the risks of death and femur fracture associated with conventional antipsychotics, antidepressants and benzodiazepines are comparable to or greater than the risks associated with atypical antipsychotics. Clinicians should weigh these risks against the potential benefits when making prescribing decisions.

Despite concerns about their safety, psychotropic medications are frequently used to manage behavioural symptoms in older adults, particularly those with dementia. These medications tend to be used because the effectiveness of psychosocial and behavioural interventions remains unclear, and because implementation of those alternative interventions is often hampered by a lack of resources.1 In nursing homes, psychotropic agents are given to up to two-thirds of patients with dementia.2–5

The safety of antipsychotic medications in older adults has been called into question. The United States Food and Drug Administration and Health Canada have issued advisories stating that certain atypical antipsychotics (risperidone, olanzapine and aripiprazole) have been associated with an increased risk of stroke and transient ischemic events, and that both atypical and conventional antipsychotics have been associated with an increased risk of death.6–11 Given this problematic safety record, physicians may increasingly resort to alternative psychotropic agents for the management of behavioural symptoms in older adults.1,12,13 However, comparative studies of the safety of other classes of psychotropic medications in such patients have not been conducted.

In the absence of randomized controlled trials, pharmacoepidemiologic studies using large databases are the best available option for defining the comparative safety of the psychopharmacologic treatment regimens used to manage behavioural symptoms in older adults with dementia. Rigorous methodologic approaches are needed to ensure that such epidemiologic studies are not biased by the selective prescribing that occurs in nonrandomized settings.1 We aimed to examine the association between various classes of psychotropic medications and a range of unintended health outcomes among older adults admitted to nursing homes. We focused on patients in nursing homes because use of psychotropic medication is known to be extensive in this setting,2–5 and because medication safety is of particular concern given the complex array of medical illnesses among these patients.
