Similar articles
1.
Schultz AS  Finegan B  Nykiforuk CI  Kvern MA 《CMAJ》2011,183(18):E1334-E1344

Background:

Many hospitals have adopted smoke-free policies on their property. We examined the consequences of such policies at two Canadian tertiary acute-care hospitals.

Methods:

We conducted a qualitative study using ethnographic techniques over a six-month period. Participants (n = 186) shared their perspectives on and experiences with tobacco dependence and managing the use of tobacco, as well as their impressions of the smoke-free policy. We interviewed inpatients individually from eight wards (n = 82), key policy-makers (n = 9) and support staff (n = 14) and held 16 focus groups with health care providers and ward staff (n = 81). We also reviewed ward documents relating to tobacco dependence and looked at smoking-related activities on hospital property.

Results:

Noncompliance with the policy and exposure to secondhand smoke were ongoing concerns. Peoples’ impressions of the use of tobacco varied, including divergent opinions as to whether such use was a bad habit or an addiction. Treatment for tobacco dependence and the management of symptoms of withdrawal were offered inconsistently. Participants voiced concerns over patient safety and leaving the ward to smoke.

Interpretation:

Policies mandating smoke-free hospital property have important consequences beyond noncompliance, including concerns over patient safety and disruptions to care. Without adequately available and accessible support for withdrawal from tobacco, patients will continue to face personal risk when they leave hospital property to smoke.

Canadian cities and provinces have passed smoking bans with the goal of reducing people’s exposure to secondhand smoke in workplaces, public spaces and on the property adjacent to public buildings.1,2 In response, Canadian health authorities and hospitals began implementing policies mandating smoke-free hospital property, with the goals of reducing the exposure of workers, patients and visitors to tobacco smoke while delivering a public health message about the dangers of smoking.25 An additional anticipated outcome was the reduced use of tobacco among patients and staff. The impetuses for adopting smoke-free policies include public support for such legislation and the potential for litigation for exposure to secondhand smoke.2,4

Tobacco use is a modifiable risk factor associated with a variety of cancers, cardiovascular diseases and respiratory conditions.611 Patients in hospital who use tobacco tend to have more surgical complications and exacerbations of acute and chronic health conditions than patients who do not use tobacco.611 Any policy aimed at reducing exposure to tobacco in hospitals is well supported by evidence, as is the integration of interventions targeting tobacco dependence.12 Unfortunately, most of the nearly five million Canadians who smoke will receive suboptimal treatment,13 as the routine provision of interventions for tobacco dependence in hospital settings is not a practice norm.1416 In smoke-free hospitals, two studies suggest minimal support is offered for withdrawal,17,18 and one reports an increased use of nicotine-replacement therapy after the implementation of the smoke-free policy.19

Assessments of the effectiveness of smoke-free policies for hospital property tend to focus on noncompliance and related issues of enforcement.17,20,21 Although evidence of noncompliance and litter on hospital property2,17,20 implies ongoing exposure to tobacco smoke, half of the participating hospital sites in one study reported less exposure to tobacco smoke within hospital buildings and on the property.18 In addition, there is evidence to suggest some decline in smoking among staff.18,19,21,22

We sought to determine the consequences of policies mandating smoke-free hospital property in two Canadian acute-care hospitals by eliciting lived experiences of the people faced with enacting the policies: patients and health care providers. In addition, we elicited stories from hospital support staff and administrators regarding the policies.

2.
3.
Gronich N  Lavi I  Rennert G 《CMAJ》2011,183(18):E1319-E1325

Background:

Combined oral contraceptives are a common method of contraception, but they carry a risk of venous and arterial thrombosis. We assessed whether use of drospirenone was associated with an increase in thrombotic risk relative to third-generation combined oral contraceptives.

Methods:

Using computerized records of the largest health care provider in Israel, we identified all women aged 12 to 50 years for whom combined oral contraceptives had been dispensed between Jan. 1, 2002, and Dec. 31, 2008. We followed the cohort until 2009. We used Poisson regression models to estimate the crude and adjusted rate ratios for risk factors for venous thrombotic events (specifically deep vein thrombosis and pulmonary embolism) and arterial thrombotic events (specifically transient ischemic attack and cerebrovascular accident). We performed multivariable analyses to compare types of contraceptives, with adjustment for the various risk factors.

Results:

We identified a total of 1017 (0.24%) venous and arterial thrombotic events among 431 223 use episodes during 819 749 woman-years of follow-up (6.33 venous events and 6.10 arterial events per 10 000 woman-years). In a multivariable model, use of drospirenone carried an increased risk of venous thrombotic events, relative to both third-generation combined oral contraceptives (rate ratio [RR] 1.43, 95% confidence interval [CI] 1.15–1.78) and second-generation combined oral contraceptives (RR 1.65, 95% CI 1.02–2.65). There was no increase in the risk of arterial thrombosis with drospirenone.

Interpretation:

Use of drospirenone-containing oral contraceptives was associated with an increased risk of deep vein thrombosis and pulmonary embolism, but not transient ischemic attack or cerebrovascular accident, relative to second- and third-generation combined oral contraceptives.

Oral hormonal therapy is the preferred method of contraception, especially among young women. In the United States in 2002, 12 million women were using “the pill.”1 In a survey of households in Great Britain conducted in 2005 and 2006, one-quarter of women aged 16 to 49 years were using this form of contraception.2 A large variety of combined oral contraceptive preparations are available, differing in terms of estrogen dose and in terms of the dose and type of the progestin component. Among preparations currently in use, the estrogen dose ranges from 15 to 35 μg, and the progestins are second-generation, third-generation or newer. The second-generation progestins (levonorgestrel and norgestrel), which are derivatives of testosterone, have differing degrees of androgenic and estrogenic activities. The structure of these agents was modified to reduce the androgenic activity, thus producing the third-generation progestins (desogestrel, gestodene and norgestimate). Newer progestins are chlormadinone acetate, a derivative of progesterone, and drospirenone, an analogue of the aldosterone antagonist spironolactone having antimineralocorticoid and antiandrogenic activities. Drospirenone is promoted as causing less weight gain and edema than other forms of oral contraceptives, but few well-designed studies have compared the minor adverse effects of these drugs.3

The use of oral contraceptives has been reported to confer an increased risk of venous and arterial thrombotic events,47 specifically an absolute risk of venous thrombosis of 6.29 per 10 000 woman-years, compared with 3.01 per 10 000 woman-years among nonusers.8 It has long been accepted that there is a dose–response relationship between estrogen and the risk of venous thrombotic events. Reducing the estrogen dose from 50 μg to 20–30 μg has reduced the risk.9 Studies published since the mid-1990s have suggested a greater risk of venous thrombotic events with third-generation oral contraceptives than with second-generation formulations,1013 indicating that the risk is also progestin-dependent. The pathophysiological mechanism of the risk with different progestins is unknown. A twofold increase in the risk of arterial events (specifically ischemic stroke6,14 and myocardial infarction7) has been observed in case–control studies for users of second-generation pills and possibly also third-generation preparations.7,14

Conflicting information is available regarding the risk of venous and arterial thrombotic events associated with drospirenone. An increased risk of venous thromboembolism, relative to second-generation pills, has been reported recently,8,15,16 whereas two manufacturer-sponsored studies claimed no increase in risk.17,18 In the study reported here, we investigated the risk of venous and arterial thrombotic events among users of various oral contraceptives in a large population-based cohort.
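The rate ratios above come from Poisson regression models, but the core calculation can be illustrated with a crude incidence rate ratio and its Wald confidence interval. The sketch below is a minimal illustration only: the event counts and person-years are hypothetical, not the study's data, and it assumes Poisson-distributed event counts.

    import math

    def crude_rate_ratio(events_a, pyears_a, events_b, pyears_b, z=1.96):
        # Crude incidence rate ratio (group A vs. group B) with a Wald 95% CI,
        # assuming the event counts follow Poisson distributions.
        rate_a = events_a / pyears_a
        rate_b = events_b / pyears_b
        rr = rate_a / rate_b
        se_log_rr = math.sqrt(1 / events_a + 1 / events_b)  # SE of log(RR)
        lo = math.exp(math.log(rr) - z * se_log_rr)
        hi = math.exp(math.log(rr) + z * se_log_rr)
        return rr, lo, hi

    # Hypothetical counts: drospirenone users vs. third-generation users
    rr, lo, hi = crude_rate_ratio(events_a=120, pyears_a=150000,
                                  events_b=210, pyears_b=375000)
    print(f"RR {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")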

4.

Background:

Telehealthcare has the potential to provide care for long-term conditions that are increasingly prevalent, such as asthma. We conducted a systematic review of studies of telehealthcare interventions used for the treatment of asthma to determine whether such approaches to care are effective.

Methods:

We searched the Cochrane Airways Group Specialised Register of Trials, which is derived from systematic searches of bibliographic databases including CENTRAL (the Cochrane Central Register of Controlled Trials), MEDLINE, Embase, CINAHL (Cumulative Index to Nursing and Allied Health Literature) and PsycINFO, as well as other electronic resources. We also searched registers of ongoing and unpublished trials. We were interested in studies that measured the following outcomes: quality of life, number of visits to the emergency department and number of admissions to hospital. Two reviewers identified studies for inclusion in our meta-analysis. We extracted data and used fixed-effect modelling for the meta-analyses.

Results:

We identified 21 randomized controlled trials for inclusion in our analysis. The methods of telehealthcare intervention these studies investigated were the telephone and video- and Internet-based models of care. Meta-analysis did not show a clinically important improvement in patients’ quality of life, and there was no significant change in the number of visits to the emergency department over 12 months. There was a significant reduction in the number of patients admitted to hospital once or more over 12 months (risk ratio 0.25 [95% confidence interval 0.09 to 0.66]).

Interpretation:

We found no evidence of a clinically important impact on patients’ quality of life, but telehealthcare interventions do appear to have the potential to reduce the risk of admission to hospital, particularly for patients with severe asthma. Further research is required to clarify the cost-effectiveness of models of care based on telehealthcare.There has been an increase in the prevalence of asthma in recent decades,13 and the Global Initiative for Asthma estimates that 300 million people worldwide now have the disease.4 The highest prevalence rates (30%) are seen in economically developed countries.58 There has also been an increase in the prevalence of asthma affecting both children and adults in many economically developing and transition countries.911Asthma’s high burden of disease requires improvements in access to treatments.7,12,13 Patterns of help-seeking behaviour are also relevant: delayed reporting is associated with morbidity and the need for emergency care.It is widely believed that telehealthcare interventions may help address some of the challenges posed by asthma by enabling remote delivery of care, facilitating timely access to health advice, supporting self-monitoring and medication concordance, and educating patients on avoiding triggers.1416 The precise role of these technologies in the management of care for people with long-term respiratory conditions needs to be established.17The objective of this study was to systematically review the effectiveness of telehealthcare interventions among people with asthma in terms of quality of life, number of visits to the emergency department and admissions to hospital for exacerbations of asthma.  相似文献   
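The pooled risk ratio for hospital admission reported above comes from fixed-effect meta-analysis. As a rough illustration of the inverse-variance pooling behind such an estimate, here is a minimal Python sketch; the three study-level risk ratios and their variances are made up for the example and are not the trial data.

    import math

    def fixed_effect_pool(log_effects, variances):
        # Inverse-variance fixed-effect pooling of study-level log risk ratios.
        weights = [1.0 / v for v in variances]
        pooled = sum(w * e for w, e in zip(weights, log_effects)) / sum(weights)
        se = math.sqrt(1.0 / sum(weights))
        return pooled, se

    # Hypothetical per-study risk ratios for >= 1 hospital admission
    study_rr = [0.20, 0.35, 0.28]
    var_log_rr = [0.30, 0.25, 0.40]  # variances of log(RR), assumed values
    pooled, se = fixed_effect_pool([math.log(r) for r in study_rr], var_log_rr)
    lo, hi = math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se)
    print(f"Pooled RR {math.exp(pooled):.2f} (95% CI {lo:.2f}-{hi:.2f})")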

5.

Background:

The San Francisco Syncope Rule has been proposed as a clinical decision rule for risk stratification of patients presenting to the emergency department with syncope. It has been validated across various populations and settings. We undertook a systematic review of its accuracy in predicting short-term serious outcomes.

Methods:

We identified studies by means of systematic searches in seven electronic databases from inception to January 2011. We extracted study data in duplicate and used a bivariate random-effects model to assess the predictive accuracy and test characteristics.

Results:

We included 12 studies with a total of 5316 patients, of whom 596 (11%) experienced a serious outcome. The prevalence of serious outcomes across the studies varied between 5% and 26%. The pooled estimate of sensitivity of the San Francisco Syncope Rule was 0.87 (95% confidence interval [CI] 0.79–0.93), and the pooled estimate of specificity was 0.52 (95% CI 0.43–0.62). There was substantial between-study heterogeneity (resulting in a 95% prediction interval for sensitivity of 0.55–0.98). The probability of a serious outcome given a negative score with the San Francisco Syncope Rule was 5% or lower, and the probability was 2% or lower when the rule was applied only to patients for whom no cause of syncope was identified after initial evaluation in the emergency department. The most common cause of false-negative classification for a serious outcome was cardiac arrhythmia.

Interpretation:

The San Francisco Syncope Rule should be applied only for patients in whom no cause of syncope is evident after initial evaluation in the emergency department. Consideration of all available electrocardiograms, as well as arrhythmia monitoring, should be included in application of the San Francisco Syncope Rule. Between-study heterogeneity was likely due to inconsistent classification of arrhythmia.Syncope is defined as sudden, transient loss of consciousness with the inability to maintain postural tone, followed by spontaneous recovery and return to pre-existing neurologic function.15 It represents a common clinical problem, accounting for 1%–3% of visits to the emergency department and up to 6% of admissions to acute care hospitals.6,7Assessment of syncope in patients presenting to the emergency department is challenging because of the heterogeneity of underlying pathophysiologic processes and diseases. Although many underlying causes of syncope are benign, others are associated with substantial morbidity or mortality, including cardiac arrhythmia, myocardial infarction, pulmonary embolism and occult hemorrhage.4,810 Consequently, a considerable proportion of patients with benign causes of syncope are admitted for inpatient evaluation.11,12 Therefore, risk stratification that allows for the safe discharge of patients at low risk of a serious outcome is important for efficient management of patients in emergency departments and for reduction of costs associated with unnecessary diagnostic workup.12,13In recent years, various prediction rules based on the probability of an adverse outcome after an episode of syncope have been proposed.3,1416 However, the San Francisco Syncope Rule, derived by Quinn and colleagues in 2004,3 is the only prediction rule for serious outcomes that has been validated in a variety of populations and settings. This simple, five-step clinical decision rule is intended to identify patients at low risk of short-term serious outcomes3,17 (Box 1).

Box 1:

San Francisco Syncope Rule3

Aim: Prediction of short-term (within 30 days) serious outcomes in patients presenting to the emergency department with syncope.

Definitions:

Syncope: Transient loss of consciousness with return to baseline neurologic function. Trauma-associated and alcohol- or drug-related loss of consciousness excluded, as is definite seizure or altered mental status.

Serious outcome: Death, myocardial infarction, arrhythmia, pulmonary embolism, stroke, subarachnoid hemorrhage, significant hemorrhage or any condition causing or likely to cause a return visit to the emergency department and admission to hospital for a related event.

Selection of predictors in multivariable analysis: Fifty predictor variables were evaluated for significant associations with a serious outcome and combined to create a minimal set of predictors that are highly sensitive and specific for prediction of a serious outcome.

Clinical decision rule: Five risk factors, indicated by the mnemonic “CHESS,” were identified to predict patients at high risk of a serious outcome:
  • C – History of congestive heart failure
  • H – Hematocrit < 30%
  • E – Abnormal findings on 12-lead ECG or cardiac monitoring17 (new changes or nonsinus rhythm)
  • S – History of shortness of breath
  • S – Systolic blood pressure < 90 mm Hg at triage
Note: ECG = electrocardiogram.

The aim of this study was to conduct a systematic review and meta-analysis of the accuracy of the San Francisco Syncope Rule in predicting short-term serious outcomes for patients presenting to the emergency department with syncope.
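Because Box 1 spells the rule out completely, it can be expressed as a simple predicate: a patient is classified as high risk if any one of the five CHESS criteria is present. The Python sketch below restates the box as code; the function and parameter names are our own, not part of the published rule.

    def chess_high_risk(history_chf, hematocrit_pct, abnormal_ecg,
                        shortness_of_breath, systolic_bp_mmhg):
        # San Francisco Syncope Rule ("CHESS"): any single positive criterion
        # classifies the patient as high risk for a short-term serious outcome.
        return any([
            history_chf,                 # C - history of congestive heart failure
            hematocrit_pct < 30,         # H - hematocrit < 30%
            abnormal_ecg,                # E - abnormal ECG or cardiac monitoring
            shortness_of_breath,         # S - history of shortness of breath
            systolic_bp_mmhg < 90,       # S - systolic blood pressure < 90 mm Hg
        ])

    # Example: normal findings except a triage systolic pressure of 86 mm Hg
    print(chess_high_risk(False, 41.0, False, False, 86))  # True -> high risk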

6.

Background

Inuit have not experienced an epidemic in type 2 diabetes mellitus, and it has been speculated that they may be protected from obesity’s metabolic consequences. We conducted a population-based screening for diabetes among Inuit in the Canadian Arctic and evaluated the association of visceral adiposity with diabetes.

Methods

A total of 36 communities participated in the International Polar Year Inuit Health Survey. Of the 2796 Inuit households approached, 1901 (68%) participated, with 2595 participants. Households were randomly selected, and adult residents were invited to participate. Assessments included anthropometry and fasting plasma lipids and glucose, and, because of survey logistics, only 32% of participants underwent a 75 g oral glucose tolerance test. We calculated weighted prevalence estimates of metabolic risk factors for all participants.

Results

Participants’ mean age was 43.3 years; 35% were obese, 43.8% had an at-risk waist, and 25% had an elevated triglyceride level. Diabetes was identified in 12.2% of participants aged 50 years and older and in 1.9% of those younger than 50 years. A hypertriglyceridemic-waist phenotype was a strong predictor of diabetes (odds ratio [OR] 8.6, 95% confidence interval [CI] 2.1–34.6) in analyses adjusted for age, sex, region, family history of diabetes, education and use of lipid-lowering medications.

Interpretation

Metabolic risk factors were prevalent among Inuit. Our results suggest that Inuit are not protected from the metabolic consequences of obesity, and that the prevalence of diabetes among them is now comparable to that observed in the general Canadian population. Assessment of waist circumference and fasting triglyceride levels could represent an efficient means for identifying Inuit at high risk for diabetes.

Indigenous people across the Arctic continue to undergo cultural transitions that affect all dimensions of life, with implications for emerging obesity and changes in patterns of disease burden.13 A high prevalence of obesity among Canadian Inuit has been noted,3,4 and yet studies have suggested that the metabolic consequences of obesity may not be as severe among Inuit as they are in predominantly Caucasian or First Nations populations.46 Conversely, the prevalence of type 2 diabetes mellitus, which was noted to be rare among Inuit in early studies,7,8 now matches or exceeds that of predominantly Caucasian comparison populations in Alaska and Greenland.911 However, in Canada, available reports suggest that diabetes prevalence among Inuit remains below that of the general Canadian population.3,12

Given the rapid changes in the Arctic and a lack of comprehensive and uniform screening assessments, we used the International Polar Year Inuit Health Survey for Adults 2007–2008 to assess the current prevalence of glycemia and the toll of age and adiposity on glycemia in this population. However, adiposity is heterogeneous, and simple measures of body mass index (BMI) in kg/m2 and waist circumference do not measure visceral adiposity (or intra-abdominal adipose tissue), which is considered more deleterious than subcutaneous fat.13 Therefore, we evaluated the “hypertriglyceridemic-waist” phenotype (i.e., the presence of both an at-risk waist circumference and an elevated triglyceride level) as a proxy indicator of visceral fat.1315
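The “hypertriglyceridemic-waist” phenotype described above is simply the conjunction of two binary screens. A minimal Python sketch follows; because the abstract does not state the waist and triglyceride cut-offs used in the survey, they are passed in as parameters, and the values in the example call are hypothetical.

    def hypertriglyceridemic_waist(waist_cm, triglycerides_mmol_l,
                                   waist_cutoff_cm, tg_cutoff_mmol_l):
        # Phenotype is present only when BOTH components are at or above
        # their cut-offs: an at-risk waist AND an elevated triglyceride level.
        return (waist_cm >= waist_cutoff_cm and
                triglycerides_mmol_l >= tg_cutoff_mmol_l)

    # Illustrative call with assumed (hypothetical) cut-offs
    print(hypertriglyceridemic_waist(waist_cm=102, triglycerides_mmol_l=2.3,
                                     waist_cutoff_cm=90, tg_cutoff_mmol_l=2.0))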

7.

Background:

Moderate alcohol consumption may reduce cardiovascular events, but little is known about its effect on atrial fibrillation in people at high risk of such events. We examined the association between moderate alcohol consumption and the risk of incident atrial fibrillation among older adults with existing cardiovascular disease or diabetes.

Methods:

We analyzed data for 30 433 adults who participated in 2 large antihypertensive drug treatment trials and who had no atrial fibrillation at baseline. The patients were 55 years or older and had a history of cardiovascular disease or diabetes with end-organ damage. We classified levels of alcohol consumption according to median cut-off values for low, moderate and high intake based on guidelines used in various countries, and we defined binge drinking as more than 5 drinks a day. The primary outcome measure was incident atrial fibrillation.

Results:

A total of 2093 patients had incident atrial fibrillation. The age- and sex-standardized incidence rate per 1000 person-years was 14.5 among those with a low level of alcohol consumption, 17.3 among those with a moderate level and 20.8 among those with a high level. Compared with participants who had a low level of consumption, those with higher levels had an increased risk of incident atrial fibrillation (adjusted hazard ratio [HR] 1.14, 95% confidence interval [CI] 1.04–1.26, for moderate consumption; 1.32, 95% CI 0.97–1.80, for high consumption). Results were similar after we excluded binge drinkers. Among those with moderate alcohol consumption, binge drinkers had an increased risk of atrial fibrillation compared with non–binge drinkers (adjusted HR 1.29, 95% CI 1.02–1.62).

Interpretation:

Moderate to high alcohol intake was associated with an increased incidence of atrial fibrillation among people aged 55 or older with cardiovascular disease or diabetes. Among moderate drinkers, the effect of binge drinking on the risk of atrial fibrillation was similar to that of habitual heavy drinking.

Atrial fibrillation is associated with an increased risk of stroke and a related high burden of mortality and morbidity, both in the general public and among patients with existing cardiovascular disease.1,2 The prevalence of atrial fibrillation increases steadily with age, as do the associated risks, and atrial fibrillation accounts for up to 23.5% of all strokes among elderly people.3

Moderate alcohol consumption has been reported to be associated with a reduced risk of cardiovascular disease and all-cause death,1,2 whereas heavy alcohol intake and binge drinking have been associated with an increased risk of stroke,4 cardiovascular disease and all-cause death.5,6 Similarly, heavy drinking and binge drinking are associated with an increased risk of incident atrial fibrillation in the general population.7 However, the association between moderate drinking and incident atrial fibrillation is less consistent and not well understood among older people with existing cardiovascular disease.

In this analysis, we examined whether drinking moderate quantities of alcohol, and binge drinking, would be associated with an increased risk of incident atrial fibrillation in a large cohort of people with existing cardiovascular disease or diabetes with end-organ damage who had been followed prospectively in 2 long-term antihypertensive drug treatment trials.

8.

Background:

Setting priorities is critical to ensure guidelines are relevant and acceptable to users, and that time, resources and expertise are used cost-effectively in their development. Stakeholder engagement and the use of an explicit procedure for developing recommendations are critical components in this process.

Methods:

We used a modified Delphi consensus process to select 20 high-priority conditions for guideline development. Canadian primary care practitioners who care for immigrants and refugees used criteria that emphasize inequities in health to identify clinical care gaps.

Results:

Nine infectious diseases were selected, as well as four mental health conditions, three maternal and child health issues, caries and periodontal disease, iron-deficiency anemia, diabetes and vision screening.

Interpretation:

Immigrant and refugee medicine covers the full spectrum of primary care, and although infectious disease continues to be an important area of concern, we are now seeing mental health and chronic diseases as key considerations for recently arriving immigrants and refugees.Canada consistently receives more than 239 000 immigrants yearly, up to 35 000 of whom are refugees.1 Many arrive with similar or better self-reported health than the general Canadian population reports, a phenomenon described as the “healthy immigrant effect.”26 However, subgroups of immigrants, for example refugees, face health disparities and often a greater burden of infectious diseases.7,8 These health issues sometimes differ from the general population because of differing disease exposures, vulnerabilities, social determinants of health and access to health services before, during and after migration. Cultural and linguistic differences combined with lack of evidence-based guidelines can contribute to poor delivery of services.9,10Community-based primary health care practitioners see most of the immigrants and refugees who arrive in Canada. This is not only because Canada’s health system centres on primary care practice, but also because people with lower socioeconomic status, language barriers and less familiarity with the system are much less likely to receive specialist care.11Guideline development can be costly in terms of time, resources and expertise.12 Setting priorities is critical, particularly when dealing with complex situations and limited resources.13 There is no standard algorithm on who should and how they should determine top priorities for guidelines, although burden of illness, feasibility and economic considerations are all important.14 Stakeholder engagement to ensure relevance and acceptability, and the use of an explicit procedure for developing recommendations are critical in guideline development.1517 We chose primary care practitioners, particularly those who care for immigrants and refugees, to help the guideline committee select conditions for clinical preventive guidelines for immigrants and refugees with a focus on the first five years of settlement.  相似文献   

9.

Background:

Although warfarin has been extensively studied in clinical trials, little is known about rates of hemorrhage attributable to its use in routine clinical practice. Our objective was to examine incident hemorrhagic events in a large population-based cohort of patients with atrial fibrillation who were starting treatment with warfarin.

Methods:

We conducted a population-based cohort study involving residents of Ontario (age ≥ 66 yr) with atrial fibrillation who started taking warfarin between Apr. 1, 1997, and Mar. 31, 2008. We defined a major hemorrhage as any visit to hospital for hemorrhage. We determined crude rates of hemorrhage during warfarin treatment, overall and stratified by CHADS2 score (congestive heart failure, hypertension, age ≥ 75 yr, diabetes mellitus and prior stroke, transient ischemic attack or thromboembolism).

Results:

We included 125 195 patients with atrial fibrillation who started treatment with warfarin during the study period. Overall, the rate of hemorrhage was 3.8% (95% confidence interval [CI] 3.8%–3.9%) per person-year. The risk of major hemorrhage was highest during the first 30 days of treatment. During this period, rates of hemorrhage were 11.8% (95% CI 11.1%–12.5%) per person-year in all patients and 16.7% (95% CI 14.3%–19.4%) per person-year among patients with a CHADS2 score of 4 or greater. Over the 5-year follow-up, 10 840 patients (8.7%) visited the hospital for hemorrhage; of these patients, 1963 (18.1%) died in hospital or within 7 days of being discharged.

Interpretation:

In this large cohort of older patients with atrial fibrillation, we found that rates of hemorrhage are highest within the first 30 days of warfarin therapy. These rates are considerably higher than the rates of 1%–3% reported in randomized controlled trials of warfarin therapy. Our study provides timely estimates of warfarin-related adverse events that may be useful to clinicians, patients and policy-makers as new options for treatment become available.Atrial fibrillation is a major risk factor for stroke and systemic embolism, and strong evidence supports the use of the anticoagulant warfarin to reduce this risk.13 However, warfarin has a narrow therapeutic range and requires regular monitoring of the international normalized ratio to optimize its effectiveness and minimize the risk of hemorrhage.4,5 Although rates of major hemorrhage reported in trials of warfarin therapy typically range between 1% and 3% per person-year,611 observational studies suggest that rates may be considerably higher when warfarin is prescribed outside of a clinical trial setting,1215 approaching 7% per person-year in some studies.1315 The different safety profiles derived from clinical trials and observational data may reflect the careful selection of patients, precise definitions of bleeding and close monitoring in the trial setting. Furthermore, although a few observational studies suggest that hemorrhage rates are higher than generally appreciated, these studies involve small numbers of patients who received care in specialized settings.1416 Consequently, the generalizability of their results to general practice may be limited.More information regarding hemorrhage rates during warfarin therapy is particularly important in light of the recent introduction of new oral anticoagulant agents such as dabigatran, rivaroxaban and apixaban, which may be associated with different outcome profiles.1719 There are currently no large studies offering real-world, population-based estimates of hemorrhage rates among patients taking warfarin, which are needed for future comparisons with new anticoagulant agents once they are widely used in routine clinical practice.20We sought to describe the risk of incident hemorrhage in a large population-based cohort of patients with atrial fibrillation who had recently started warfarin therapy.  相似文献   
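The study stratifies hemorrhage rates by CHADS2 score, which is a simple additive index. The sketch below uses the conventional weighting (1 point each for congestive heart failure, hypertension, age ≥ 75 years and diabetes; 2 points for prior stroke, transient ischemic attack or thromboembolism); these weights are standard for CHADS2 but are not restated in the abstract, so treat them as an assumption here.

    def chads2_score(chf, hypertension, age_years, diabetes, prior_stroke_tia_te):
        # CHADS2: Congestive heart failure, Hypertension, Age >= 75, Diabetes,
        # and prior Stroke/TIA/thromboembolism (worth 2 points).
        score = 0
        score += 1 if chf else 0
        score += 1 if hypertension else 0
        score += 1 if age_years >= 75 else 0
        score += 1 if diabetes else 0
        score += 2 if prior_stroke_tia_te else 0
        return score

    # Example: a 78-year-old with hypertension and a prior TIA scores 4,
    # placing them in the highest-risk stratum reported above.
    print(chads2_score(chf=False, hypertension=True, age_years=78,
                       diabetes=False, prior_stroke_tia_te=True))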

10.

Background:

Falls cause more than 60% of head injuries in older adults. Lack of objective evidence on the circumstances of these events is a barrier to prevention. We analyzed video footage to determine the frequency of and risk factors for head impact during falls in older adults in 2 long-term care facilities.

Methods:

Over 39 months, we captured on video 227 falls involving 133 residents. We used a validated questionnaire to analyze the mechanisms of each fall. We then examined whether the probability for head impact was associated with upper-limb protective responses (hand impact) and fall direction.

Results:

Head impact occurred in 37% of falls, usually onto a vinyl or linoleum floor. Hand impact occurred in 74% of falls but had no significant effect on the probability of head impact (p = 0.3). An increased probability of head impact was associated with a forward initial fall direction, compared with backward falls (odds ratio [OR] 2.7, 95% confidence interval [CI] 1.3–5.9) or sideways falls (OR 2.8, 95% CI 1.2–6.3). In 36% of sideways falls, residents rotated to land backwards, which reduced the probability of head impact (OR 0.2, 95% CI 0.04–0.8).

Interpretation:

Head impact was common in observed falls in older adults living in long-term care facilities, particularly in forward falls. Backward rotation during descent appeared to be protective, but hand impact was not. Attention to upper-limb strength and teaching rotational falling techniques (as in martial arts training) may reduce fall-related head injuries in older adults.Falls from standing height or lower are the cause of more than 60% of hospital admissions for traumatic brain injury in adults older than 65 years.15 Traumatic brain injury accounts for 32% of hospital admissions and more than 50% of deaths from falls in older adults.1,68 Furthermore, the incidence and age-adjusted rate of fall-related traumatic brain injury is increasing,1,9 especially among people older than 80 years, among whom rates have increased threefold over the past 30 years.10 One-quarter of fall-related traumatic brain injuries in older adults occur in long-term care facilities.1The development of improved strategies to prevent fall-related traumatic brain injuries is an important but challenging task. About 60% of residents in long-term care facilities fall at least once per year,11 and falls result from complex interactions of physiologic, environmental and situational factors.1216 Any fall from standing height has sufficient energy to cause brain injury if direct impact occurs between the head and a rigid floor surface.1719 Improved understanding is needed of the factors that separate falls that result in head impact and injury from those that do not.1,10 Falls in young adults rarely result in head impact, owing to protective responses such as use of the upper limbs to stop the fall, trunk flexion and rotation during descent.2023 We have limited evidence of the efficacy of protective responses to falls among older adults.In the current study, we analyzed video footage of real-life falls among older adults to estimate the prevalence of head impact from falls, and to examine the association between head impact, and biomechanical and situational factors.  相似文献   

11.

Background:

Uncircumcised boys are at higher risk for urinary tract infections than circumcised boys. Whether this risk varies with the visibility of the urethral meatus is not known. Our aim was to determine whether there is a hierarchy of risk among uncircumcised boys whose urethral meatuses are visible to differing degrees.

Methods:

We conducted a prospective cross-sectional study in one pediatric emergency department. We screened 440 circumcised and uncircumcised boys. Of these, 393 boys who were not toilet trained and for whom the treating physician had requested a catheter urine culture were included in our analysis. At the time of catheter insertion, a nurse characterized the visibility of the urethral meatus (phimosis) using a 3-point scale (completely visible, partially visible or nonvisible). Our primary outcome was urinary tract infection, and our primary exposure variable was the degree of phimosis: completely visible versus partially or nonvisible urethral meatus.

Results:

Cultures grew from urine samples from 30.0% of uncircumcised boys with a completely visible meatus, and from 23.8% of those with a partially or nonvisible meatus (p = 0.4). The unadjusted odds ratio (OR) for culture growth was 0.73 (95% confidence interval [CI] 0.35–1.52), and the adjusted OR was 0.41 (95% CI 0.17–0.95). Of the boys who were circumcised, 4.8% had urinary tract infections, which was significantly lower than the rate among uncircumcised boys with a completely visible urethral meatus (unadjusted OR 0.12 [95% CI 0.04–0.39], adjusted OR 0.07 [95% CI 0.02–0.26]).

Interpretation:

We did not see variation in the risk of urinary tract infection with the visibility of the urethral meatus among uncircumcised boys. Compared with circumcised boys, we saw a higher risk of urinary tract infection in uncircumcised boys, irrespective of urethral visibility.Urinary tract infections are one of the most common serious bacterial infections in young children.16 Prompt diagnosis is important, because children with urinary tract infection are at risk for bacteremia6 and renal scarring.1,7 Uncircumcised boys have a much higher risk of urinary tract infection than circumcised boys,1,3,4,6,812 likely as a result of heavier colonization under the foreskin with pathogenic bacteria, which leads to ascending infections.13,14 The American Academy of Pediatrics recently suggested that circumcision status be used to select which boys should be evaluated for urinary tract infection.1 However, whether all uncircumcised boys are at equal risk for infection, or whether the risk varies with the visibility of the urethral opening, is not known. It has been suggested that a subset of uncircumcised boys with a poorly visible urethral opening are at increased risk of urinary tract infection,1517 leading some experts to consider giving children with tight foreskins topical cortisone or circumcision to prevent urinary tract infections.13,1821We designed a study to challenge the opinion that all uncircumcised boys are at increased risk for urinary tract infections. We hypothesized a hierarchy of risk among uncircumcised boys depending on the visibility of the urethral meatus, with those with a partially or nonvisible meatus at highest risk, and those with a completely visible meatus having a level of risk similar to that of boys who have been circumcised. Our primary aim was to compare the proportions of urinary tract infections among uncircumcised boys with a completely visible meatus with those with a partially or nonvisible meatus.  相似文献   

12.

Background:

Polymyalgia rheumatica is one of the most common inflammatory rheumatologic conditions in older adults. Other inflammatory rheumatologic disorders are associated with an excess risk of vascular disease. We investigated whether polymyalgia rheumatica is associated with an increased risk of vascular events.

Methods:

We used the General Practice Research Database to identify patients with a diagnosis of incident polymyalgia rheumatica between Jan. 1, 1987, and Dec. 31, 1999. Patients were matched by age, sex and practice with up to 5 patients without polymyalgia rheumatica. Patients were followed until their first vascular event (cardiovascular, cerebrovascular, peripheral vascular) or the end of available records (May 2011). All participants were free of vascular disease before the diagnosis of polymyalgia rheumatica (or matched date). We used Cox regression models to compare time to first vascular event in patients with and without polymyalgia rheumatica.

Results:

A total of 3249 patients with polymyalgia rheumatica and 12 735 patients without were included in the final sample. Over a median follow-up period of 7.8 (interquartile range 3.3–12.4) years, the rate of vascular events was higher among patients with polymyalgia rheumatica than among those without (36.1 v. 12.2 per 1000 person-years; adjusted hazard ratio 2.6, 95% confidence interval 2.4–2.9). The increased risk of a vascular event was similar for each vascular disease end point. The magnitude of risk was higher in early disease and in patients younger than 60 years at diagnosis.

Interpretation:

Patients with polymyalgia rheumatica have an increased risk of vascular events. This risk is greatest in the youngest age groups. As with other forms of inflammatory arthritis, patients with polymyalgia rheumatica should have their vascular risk factors identified and actively managed to reduce this excess risk.Inflammatory rheumatologic disorders such as rheumatoid arthritis,1,2 systemic lupus erythematosus,2,3 gout,4 psoriatic arthritis2,5 and ankylosing spondylitis2,6 are associated with an increased risk of vascular disease, especially cardiovascular disease, leading to substantial morbidity and premature death.26 Recognition of this excess vascular risk has led to management guidelines advocating screening for and management of vascular risk factors.79Polymyalgia rheumatica is one of the most common inflammatory rheumatologic conditions in older adults,10 with a lifetime risk of 2.4% for women and 1.7% for men.11 To date, evidence regarding the risk of vascular disease in patients with polymyalgia rheumatica is unclear. There are a number of biologically plausible mechanisms between polymyalgia rheumatica and vascular disease. These include the inflammatory burden of the disease,12,13 the association of the disease with giant cell arteritis (causing an inflammatory vasculopathy, which may lead to subclinical arteritis, stenosis or aneurysms),14 and the adverse effects of long-term corticosteroid treatment (e.g., diabetes, hypertension and dyslipidemia).15,16 Paradoxically, however, use of corticosteroids in patients with polymyalgia rheumatica may actually decrease vascular risk by controlling inflammation.17 A recent systematic review concluded that although some evidence exists to support an association between vascular disease and polymyalgia rheumatica,18 the existing literature presents conflicting results, with some studies reporting an excess risk of vascular disease19,20 and vascular death,21,22 and others reporting no association.2326 Most current studies are limited by poor methodologic quality and small samples, and are based on secondary care cohorts, who may have more severe disease, yet most patients with polymyalgia rheumatica receive treatment exclusively in primary care.27The General Practice Research Database (GPRD), based in the United Kingdom, is a large electronic system for primary care records. It has been used as a data source for previous studies,28 including studies on the association of inflammatory conditions with vascular disease29 and on the epidemiology of polymyalgia rheumatica in the UK.30 The aim of the current study was to examine the association between polymyalgia rheumatica and vascular disease in a primary care population.  相似文献   

13.

Background

Prioritizing patients using empirically derived access targets can help to ensure high-quality care. Adolescent scoliosis can worsen while patients wait for treatment, increasing the risk of adverse events. Our objective was to determine an empirically derived access target for scoliosis surgery and to compare this with consensus-based targets.

Methods

Two hundred sixteen sequential patients receiving surgery for adolescent idiopathic scoliosis were included in the study. The main outcome was need for additional surgery. Logistic regression modelling was used to evaluate the relation between surgical wait times and adverse events, and χ2 analysis was used as the primary analysis for the main outcome.

Results

Of the 88 patients who waited longer than six months for surgery, 13 (14.8%) needed additional surgery due to progression of curvature, versus 2 (1.6%) of the 128 patients who waited less than six months for surgery (χ2 analysis, p = 0.0001). Patients who waited longer than six months for surgery had greater progression of curvature, longer surgeries and longer stays in hospital. These patients also had less surgical correction than patients who waited less than six months for surgery (Wilcoxon–Mann–Whitney test, p = 0.011). All patients requiring additional surgeries waited longer than three months for their initial surgery. A receiver operating characteristic curve also suggested a three-month wait as an access target. The adjusted odds ratio for an adverse event for each additional 90 days of waiting from time of consent was 1.81 (95% confidence interval 1.34–2.44). The adjusted odds ratio increased with skeletal immaturity and with the size of the spinal curvature at the time of consent.

Interpretation

A prolonged wait for surgery increased the risk of additional surgical procedures and other adverse events. An empirically derived access target of three months for surgery to treat adolescent idiopathic scoliosis could potentially eliminate the need for additional surgery by reducing progression of curvature. This is a shorter access target than the six months determined by expert consensus.

Adolescent idiopathic scoliosis affects just over 2% of females aged 12–14 years.13 Although only 10% of patients require surgery, spinal instrumentation and fusion for adolescent idiopathic scoliosis is the most common procedure done in pediatric orthopaedics.4 Patients who wait too long for scoliosis surgery may require additional surgery such as anterior release to achieve satisfactory correction of the spinal curvature. These patients may also need longer surgeries and may be at increased risk of complications such as increased blood loss, neurologic deficits or inadequate correction of the curvature.514 Furthermore, as seen in other studies of wait times, patients and families can feel anxiety and prolonged suffering while waiting for treatment, which can negatively impact the quality of care.1519 Programs such as the Canadian Pediatric Surgical Wait Times Project have determined a maximal acceptable wait time for adolescent scoliosis through expert consensus (similar to how other surgical wait time targets have been determined).20 Surprisingly, there has been little or no attention given to developing evidence-based access targets or maximal acceptable wait times for most treatments.21 The purpose of this study was to determine the maximal acceptable wait time for surgical correction of adolescent idiopathic scoliosis using an empirically based approach to minimize the possibility of adverse events related to progression of curvature.
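The adjusted odds ratio of 1.81 is expressed per additional 90 days of waiting. Because the log-odds in a logistic model are linear in the covariate, that ratio can be rescaled to other waiting intervals by exponentiation, as in this small Python illustration (the 180-day figure is derived arithmetic, not a result reported by the study).

    import math

    # OR per 90 days of waiting, as reported in the abstract
    or_per_90_days = 1.81

    # Log-odds increase attributable to one additional day of waiting
    beta_per_day = math.log(or_per_90_days) / 90

    # Rescale: OR per 180 days equals exp(180 * beta), i.e. 1.81 squared
    or_per_180_days = math.exp(beta_per_day * 180)
    print(f"OR per 180 days of waiting: {or_per_180_days:.2f}")  # ~3.28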

14.

Background:

Little evidence exists on the effect of an energy-unrestricted healthy diet on metabolic syndrome. We evaluated the long-term effect of Mediterranean diets ad libitum on the incidence or reversion of metabolic syndrome.

Methods:

We performed a secondary analysis of the PREDIMED trial — a multicentre, randomized trial done between October 2003 and December 2010 that involved men and women (age 55–80 yr) at high risk for cardiovascular disease. Participants were randomly assigned to 1 of 3 dietary interventions: a Mediterranean diet supplemented with extra-virgin olive oil, a Mediterranean diet supplemented with nuts or advice on following a low-fat diet (the control group). The interventions did not include increased physical activity or weight loss as a goal. We analyzed available data from 5801 participants. We determined the effect of diet on incidence and reversion of metabolic syndrome using Cox regression analysis to calculate hazard ratios (HRs) and 95% confidence intervals (CIs).

Results:

Over 4.8 years of follow-up, metabolic syndrome developed in 960 (50.0%) of the 1919 participants who did not have the condition at baseline. The risk of developing metabolic syndrome did not differ between participants assigned to the control diet and those assigned to either of the Mediterranean diets (control v. olive oil HR 1.10, 95% CI 0.94–1.30, p = 0.231; control v. nuts HR 1.08, 95% CI 0.92–1.27, p = 0.3). Reversion occurred in 958 (28.2%) of the 3392 participants who had metabolic syndrome at baseline. Compared with the control group, participants on either Mediterranean diet were more likely to undergo reversion (control v. olive oil HR 1.35, 95% CI 1.15–1.58, p < 0.001; control v. nuts HR 1.28, 95% CI 1.08–1.51, p < 0.001). Participants in the group receiving olive oil supplementation showed significant decreases in both central obesity and high fasting glucose (p = 0.02); participants in the group supplemented with nuts showed a significant decrease in central obesity.

Interpretation:

A Mediterranean diet supplemented with either extra virgin olive oil or nuts is not associated with the onset of metabolic syndrome, but such diets are more likely to cause reversion of the condition. An energy-unrestricted Mediterranean diet may be useful in reducing the risks of central obesity and hyperglycemia in people at high risk of cardiovascular disease. Trial registration: ClinicalTrials.gov, no. ISRCTN35739639.Metabolic syndrome is a cluster of 3 or more related cardiometabolic risk factors: central obesity (determined by waist circumference), hypertension, hypertriglyceridemia, low plasma high-density lipoprotein (HDL) cholesterol levels and hyperglycemia. Having the syndrome increases a person’s risk for type 2 diabetes and cardiovascular disease.1,2 In addition, the condition is associated with increased morbidity and all-cause mortality.1,35 The worldwide prevalence of metabolic syndrome in adults approaches 25%68 and increases with age,7 especially among women,8,9 making it an important public health issue.Several studies have shown that lifestyle modifications,10 such as increased physical activity,11 adherence to a healthy diet12,13 or weight loss,1416 are associated with reversion of the metabolic syndrome and its components. However, little information exists as to whether changes in the overall dietary pattern without weight loss might also be effective in preventing and managing the condition.The Mediterranean diet is recognized as one of the healthiest dietary patterns. It has shown benefits in patients with cardiovascular disease17,18 and in the prevention and treatment of related conditions, such as diabetes,1921 hypertension22,23 and metabolic syndrome.24Several cross-sectional2529 and prospective3032 epidemiologic studies have suggested an inverse association between adherence to the Mediterranean diet and the prevalence or incidence of metabolic syndrome. Evidence from clinical trials has shown that an energy-restricted Mediterranean diet33 or adopting a Mediterranean diet after weight loss34 has a beneficial effect on metabolic syndrome. However, these studies did not determine whether the effect could be attributed to the weight loss or to the diets themselves.Seminal data from the PREDIMED (PREvención con DIeta MEDiterránea) study suggested that adherence to a Mediterranean diet supplemented with nuts reversed metabolic syndrome more so than advice to follow a low-fat diet.35 However, the report was based on data from only 1224 participants followed for 1 year. We have analyzed the data from the final PREDIMED cohort after a median follow-up of 4.8 years to determine the long-term effects of a Mediterranean diet on metabolic syndrome.  相似文献   
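The definition given above, metabolic syndrome as 3 or more of 5 cardiometabolic components, maps directly onto a counting rule. The Python sketch below assumes each component has already been dichotomized against its clinical cut-off (waist circumference, blood pressure, triglycerides, HDL cholesterol and fasting glucose); the function name and boolean inputs are ours, not the trial's.

    def has_metabolic_syndrome(central_obesity, hypertension,
                               hypertriglyceridemia, low_hdl, hyperglycemia):
        # Syndrome is present when 3 or more of the 5 components are present.
        components = [central_obesity, hypertension, hypertriglyceridemia,
                      low_hdl, hyperglycemia]
        return sum(components) >= 3

    # Example: central obesity + hypertension + low HDL -> 3 components -> True
    print(has_metabolic_syndrome(True, True, False, True, False))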

15.

Background:

Although statins have been shown to reduce the risk of cardiovascular events in patients at low cardiovascular risk, their absolute benefit is small in the short term, which may adversely affect cost-effectiveness. We sought to determine the long-term cost-effectiveness (beyond the duration of clinical trials) of low- and high-potency statins in patients at low cardiovascular risk and to estimate the impact on Canada’s publicly funded health care system.

Methods:

Using Markov modelling, we performed a cost-utility analysis in which we compared low-potency statins (fluvastatin, lovastatin, pravastatin and simvastatin) and high-potency statins (atorvastatin and rosuvastatin) with no statins in a simulated cohort of low-risk patients over a lifetime horizon. Model outcomes included costs (in 2010 Canadian dollars), quality-adjusted life-years (QALYs) gained and the cost per QALY gained.

Results:

Over a lifetime horizon, the cost of managing a patient at low cardiovascular risk was estimated to be about $10 100 without statins, $15 200 with low-potency statins and $16 400 with high-potency statins. The cost per QALY gained with high-potency statins (v. no statins) was $21 300; the use of low-potency statins was not considered economically attractive. These results were robust to sensitivity analyses, although their use became economically unattractive when the duration of benefit from statin use was assumed to be less than 10 years.

Interpretation:

Use of high-potency statins in patients at low cardiovascular risk was associated with a cost per QALY gained that was economically attractive by current standards, assuming that the benefit from statin use would continue for at least 10 years. However, the overall expenditure on statins would be substantial, and the ramifications of this practice should be carefully considered by policy-makers.

Although statins improve survival and reduce the risk of cardiovascular events in populations at high and moderate risk,1 their effectiveness and cost-effectiveness in low-risk populations are less certain.2 This uncertainty is due in part to low-risk patients being less likely to have cardiovascular events over the short term. For instance, in the recent Justification for the Use of Statins in Prevention: an Intervention Trial Evaluating Rosuvastatin (JUPITER) study3 — a large randomized trial comparing cardiovascular outcomes in low-risk patients randomly assigned to receive either rosuvastatin or placebo — the risk of death or nonfatal myocardial infarction over three years was 2.5% in the rosuvastatin group and 3.5% in the placebo group, which represented a large relative, but small absolute, risk reduction in cardiovascular events.

Other cholesterol-lowering interventions are available, such as diet, exercise and the use of other hypolipidemic agents, but the use of statins is the only such intervention known to reduce cardiovascular risk in people with low and high blood cholesterol levels.47 Thus, statins are now primarily indicated for the reduction of cardiovascular risk instead of being used mainly for the management of hypercholesterolemia.8

With this broadening indication for use, expenditures on statins have increased and represent about 13% of total expenditures by provincial formularies in Canada.9 The absolute number of people at low cardiovascular risk who are taking statins has increased substantially over the last decade, driven by the large number of low-risk people in the general population.10 In addition, statins that are more effective in lowering low-density lipoprotein (LDL) cholesterol levels have become available.3,11 These high-potency statins (atorvastatin and rosuvastatin) are substantially more expensive than low-potency statins available as generics (pravastatin, simvastatin, fluvastatin and lovastatin), although atorvastatin has recently become available as a generic in Canada.12 Increasing costs and concerns over the absolute benefit of statins in people at low cardiovascular risk have raised questions about the cost-effectiveness of statins in this group.

We performed an incremental cost-utility analysis comparing low- and high-potency statins with no statins in patients at low cardiovascular risk in a Canadian setting. We used findings from our group’s recent systematic review of the efficacy of statins for primary prevention in low-risk people13 as well as observational data from a large provincial registry of patients documenting existing statin use. Our objective was to determine which strategy represents the best use of health care resources for the publicly funded health care system, and what investment would be required to fund statins.
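The headline figure of about $21 300 per QALY gained is an incremental cost-effectiveness ratio: extra lifetime cost divided by extra QALYs. The sketch below shows that arithmetic; the lifetime costs are taken from the abstract, but the QALY totals are hypothetical, back-calculated values chosen only to reproduce a ratio of roughly that size.

    def cost_per_qaly_gained(cost_new, cost_old, qalys_new, qalys_old):
        # Incremental cost-effectiveness ratio (ICER), in dollars per QALY gained.
        return (cost_new - cost_old) / (qalys_new - qalys_old)

    # Lifetime cost per low-risk patient (from the abstract): ~$16 400 with
    # high-potency statins vs. ~$10 100 with no statins. The ~0.3 QALY gain
    # is an assumed, back-calculated figure for illustration only.
    icer = cost_per_qaly_gained(16400, 10100, 10.3, 10.0)
    print(f"Cost per QALY gained: ${icer:,.0f}")  # roughly $21 000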

16.

Background:

Persistent postoperative pain continues to be an underrecognized complication. We examined the prevalence of and risk factors for this type of pain after cardiac surgery.

Methods:

We enrolled patients scheduled for coronary artery bypass grafting or valve replacement, or both, from Feb. 8, 2005, to Sept. 1, 2009. Validated measures were used to assess (a) preoperative anxiety and depression, tendency to catastrophize in the face of pain, health-related quality of life and presence of persistent pain; (b) pain intensity and interference in the first postoperative week; and (c) presence and intensity of persistent postoperative pain at 3, 6, 12 and 24 months after surgery. The primary outcome was the presence of persistent postoperative pain during 24 months of follow-up.

Results:

A total of 1247 patients completed the preoperative assessment. Follow-up retention rates at 3 and 24 months were 84% and 78%, respectively. The prevalence of persistent postoperative pain decreased significantly over time, from 40.1% at 3 months to 22.1% at 6 months, 16.5% at 12 months and 9.5% at 24 months; the pain was rated as moderate to severe in 3.6% at 24 months. Acute postoperative pain predicted both the presence and severity of persistent postoperative pain. The more intense the pain during the first week after surgery and the more it interfered with functioning, the more likely the patients were to report persistent postoperative pain. Pre-existing persistent pain and increased preoperative anxiety also predicted the presence of persistent postoperative pain.

Interpretation:

Persistent postoperative pain of nonanginal origin after cardiac surgery affected a substantial proportion of the study population. Future research is needed to determine whether interventions to modify certain risk factors, such as preoperative anxiety and the severity of pain before and immediately after surgery, may help to minimize or prevent persistent postoperative pain. Postoperative pain that persists beyond the normal time for tissue healing (> 3 mo) is increasingly recognized as an important complication after various types of surgery and can have serious consequences for patients’ daily living.13 Cardiac surgeries, such as coronary artery bypass grafting (CABG) and valve replacement, rank among the most frequently performed interventions worldwide.4 They aim to improve survival and quality of life by reducing symptoms, including anginal pain. However, persistent postoperative pain of nonanginal origin has been reported in 7% to 60% of patients following these surgeries.523 Such variability is common in other types of major surgery and is due mainly to differences in the definition of persistent postoperative pain, study design, data collection methods and duration of follow-up.13,24 Few prospective cohort studies have examined the exact time course of persistent postoperative pain after cardiac surgery, and follow-up has always been limited to a year or less.9,14,25 Factors that put patients at risk of this type of problem are poorly understood.26 Studies have reported inconsistent results regarding the contribution of age, sex, body mass index, preoperative angina, surgical technique, grafting site, postoperative complications or level of opioid consumption after surgery.57,9,13,14,1619,2123,25,27 Only 1 study investigated the role of chronic nonanginal pain before surgery as a contributing factor;21 5 others prospectively assessed the association between persistent postoperative pain and acute pain intensity in the first postoperative week but reported conflicting results.13,14,21,22,25 All of the above studies were carried out in a single hospital and included relatively small samples. None of the studies examined the contribution of psychological factors such as levels of anxiety and depression before cardiac surgery, although these factors have been shown to influence acute or persistent postoperative pain in other types of surgery.1,24,28,29 We conducted a prospective multicentre cohort study (the CARD-PAIN study) to determine the prevalence of persistent postoperative pain of nonanginal origin up to 24 months after cardiac surgery and to identify risk factors for the presence and severity of the condition.  相似文献
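The risk-factor findings above (acute postoperative pain intensity and interference, pre-existing pain, preoperative anxiety) are reported without naming a statistical model. As a hedged illustration of how such predictors of a binary outcome are commonly quantified, the sketch below fits a logistic regression on synthetic data with statsmodels; all variable names, scales and coefficients are hypothetical and are not taken from the CARD-PAIN study.

```python
# Illustrative sketch only: logistic regression for the presence of persistent
# postoperative pain, on synthetic data (not the CARD-PAIN cohort).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "acute_pain": rng.uniform(0, 10, n),        # first-week pain intensity (hypothetical 0-10 scale)
    "preop_anxiety": rng.uniform(0, 21, n),     # hypothetical preoperative anxiety score
    "preexisting_pain": rng.integers(0, 2, n),  # 0/1 indicator of persistent pain before surgery
})
# Synthetic outcome loosely tied to the predictors, for demonstration only.
logit_p = -3 + 0.25 * df["acute_pain"] + 0.05 * df["preop_anxiety"] + 0.8 * df["preexisting_pain"]
df["persistent_pain"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

fit = smf.logit("persistent_pain ~ acute_pain + preop_anxiety + preexisting_pain", data=df).fit()
print(np.exp(fit.params))   # odds ratios for each hypothetical risk factor
```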

17.

Background:

Little is known about the association between practice size and clinical outcomes in primary care. We examined this association between 1997 and 2005, as well as the impact on diabetes management of the Quality and Outcomes Framework, a pay-for-performance incentive scheme introduced in the United Kingdom in 2004.

Methods:

We conducted a retrospective open-cohort study using data from the General Practice Research Database. We enrolled 422 general practices providing care for 154 945 patients with diabetes. Our primary outcome measures were the achievement of national treatment targets for blood pressure, glycated hemoglobin (HbA1c) levels and total cholesterol.

Results:

We saw improvements in the recording of process of care measures, prescribing and achieving intermediate outcomes in all practice sizes during the study period. We also saw improvement in reaching national targets after the introduction of the Quality and Outcomes Framework. These improvements significantly exceeded the underlying trends in all practice sizes for achieving targets for cholesterol level and blood pressure, but not for HbA1c level. In 1997 and 2005, there were no significant differences between the smallest and largest practices in achieving targets for blood pressure (1997 odds ratio [OR] 0.98, 95% confidence interval [CI] 0.82 to 1.16; 2005 OR 0.92, 95% CI 0.80 to 1.06), cholesterol level (1997 OR 0.94, 95% CI 0.76 to 1.16; 2005 OR 1.10, 95% CI 0.97 to 1.40) and glycated hemoglobin level (1997 OR 0.79, 95% CI 0.55 to 1.14; 2005 OR 1.05, 95% CI 0.93 to 1.19).

Interpretation:

We found no evidence that size of practice is associated with the quality of diabetes management in primary care. Pay-for-performance programs appear to benefit both large and small practices to a similar extent. There is a well-established body of literature, documented in a systematic review,1 showing positive associations between volume of patients and clinical outcomes in health care. However, this association has usually been examined in a limited number of discrete procedures, and most studies have involved hospital-based services rather than primary care settings.25 Improving our understanding of the association between volume of patients and outcomes in primary care is important for several reasons. First, most contacts with health systems occur in primary care settings, and optimizing the delivery of these services has the potential to improve the health of the population.6 Second, over the past decade, primary care has assumed greater responsibility for managing the growing burden of chronic disease.7,8 Larger providers may be better resourced, through the employment of additional support staff and greater use of information technology, to deliver the systematic, structured care necessary for the effective management of chronic disease.6,9 Third, larger providers may have been more responsive to nonfinancial and financial incentives implemented by payers to improve the quality of care, including pay for performance.7,10 Fourth, in many countries, primary care is based around a predominance of small practices.6,11,12 In 2006, 53% of practices in England and Wales had three or fewer family physicians.11 In the same year in the United States, 30.3% of family physicians were in solo practice; 9.4% were in two-physician practices.12 Despite the limited data available, concerns have been raised about the standards of care delivered by smaller family practices.13 In the United Kingdom and Canada, this has resulted in an explicit policy objective of encouraging smaller practices to amalgamate.13,14 Our study examines the associations between the size of practice and the quality of diabetes management in UK primary care settings between 1997 and 2005. We tested the hypotheses that patients attending larger family practices receive better care for diabetes and that the quality gap between larger and smaller practices has increased over the past decade. We also hypothesized that larger practices derived more benefit from the Quality and Outcomes Framework, a major pay-for-performance program in primary care introduced in 2004.  相似文献
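To illustrate how the odds ratios and confidence intervals reported above can be read, the sketch below computes an unadjusted odds ratio with a Woolf-type 95% confidence interval from a hypothetical 2x2 table of target achievement in the smallest versus largest practices. The counts are invented, and the study's own estimates were adjusted for additional factors that this simple calculation ignores.

```python
# Illustrative sketch: unadjusted odds ratio and 95% CI from a 2x2 table of
# treatment-target achievement (hypothetical counts, not study data).
import math

# rows: smallest practices, largest practices; columns: target met, target not met
a, b = 420, 580   # smallest practices: met / not met
c, d = 450, 550   # largest practices: met / not met

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)        # Woolf standard error of log(OR)
lower = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
upper = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR {odds_ratio:.2f}, 95% CI {lower:.2f} to {upper:.2f}")
```

An interval that spans 1.0, as in the blood pressure and HbA1c comparisons above, is consistent with no detectable difference between the smallest and largest practices.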

18.
Riediger ND  Clara I 《CMAJ》2011,183(15):E1127-E1134

Background:

Metabolic syndrome refers to a constellation of conditions that increases a person’s risk of diabetes and cardiovascular disease. We describe the prevalence of metabolic syndrome and its components in relation to sociodemographic factors in the Canadian adult population.

Methods:

We used data from cycle 1 of the Canadian Health Measures Survey, a cross-sectional survey of a representative sample of the population. We included data for respondents aged 18 years and older for whom fasting blood samples were available; pregnant women were excluded. We calculated weighted estimates of the prevalence of metabolic syndrome and its components in relation to age, sex, education level and income.

Results:

The estimated prevalence of metabolic syndrome was 19.1%. Age was the strongest predictor of the syndrome: 17.0% of participants 18–39 years old had metabolic syndrome, as compared with 39.0% of those 70–79 years. Abdominal obesity was the most common component of the syndrome (35.0%) and was more prevalent among women than among men (40.0% v. 29.1%; p = 0.013). Men were more likely than women to have an elevated fasting glucose level (18.9% v. 13.6%; p = 0.025) and hypertriglyceridemia (29.0% v. 20.0%; p = 0.012). The prevalence of metabolic syndrome was higher among people in households with lower education and income levels.

Interpretation:

About one in five Canadian adults had metabolic syndrome. People at increased risk were those in households with lower education and income levels. The burden of abdominal obesity, low HDL (high-density lipoprotein) cholesterol and hypertriglyceridemia among young people was of particular concern, because the risk of cardiovascular disease increases with age. Chronic disease contributes significantly to morbidity and mortality in the Canadian population.1 As such, the economic costs are substantial. Metabolic syndrome refers to a constellation of conditions that approximately doubles a person’s risk of cardiovascular disease, independently of other risk factors.25 The cause of metabolic syndrome has not been fully elucidated; a summary of the currently proposed mechanisms is provided elsewhere.6 Several sets of criteria have been established for the detection of metabolic syndrome, many of which have been continually updated.68 The set of criteria most commonly used in the past was published in the third report of the National Cholesterol Education Program Expert Panel on Detection, Evaluation, and Treatment of High Blood Cholesterol in Adults (Adult Treatment Panel III criteria).9 Recently, the International Diabetes Federation, the American Heart Association, the National Heart, Lung, and Blood Institute, and other organizations collaborated to release a unified set of criteria.10 The Canadian Health Measures Survey, conducted in 2007–2009, was the first cross-sectional survey of a representative sample of Canadians to collect biological samples since the Canadian Heart Health Surveys about 20 years ago.11 We used data from the Canadian Health Measures Survey to describe the prevalence of metabolic syndrome and its components by age, sex, education level and income adequacy in a sample of the Canadian adult population. Because different studies have used various criteria in the past to define metabolic syndrome, and because there is continuing controversy as to the appropriate criteria, we calculated the prevalence according to several types of criteria to facilitate comparison with findings from past and future studies.  相似文献
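The prevalence figures above are weighted estimates from a complex survey. As a minimal sketch of what a design-weighted prevalence looks like, the code below averages a binary metabolic-syndrome indicator using person-level survey weights; the weights and indicator are synthetic, and proper variance estimation for the Canadian Health Measures Survey would additionally require its bootstrap weights, which are omitted here.

```python
# Illustrative sketch: design-weighted prevalence estimate on synthetic data
# (not the Canadian Health Measures Survey microdata).
import numpy as np

rng = np.random.default_rng(1)
n = 500
survey_weight = rng.uniform(500, 5000, n)     # person-level survey weights (hypothetical)
has_met_syndrome = rng.binomial(1, 0.19, n)   # 1 if the respondent meets the chosen criteria

weighted_prevalence = np.sum(survey_weight * has_met_syndrome) / np.sum(survey_weight)
print(f"Weighted prevalence: {weighted_prevalence:.1%}")
```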

19.
20.

Background

Systemic inflammation and dysregulated immune function in chronic obstructive pulmonary disease (COPD) are hypothesized to predispose patients to the development of herpes zoster. However, the risk of herpes zoster among patients with COPD is undocumented. We therefore aimed to investigate this risk.

Methods

We conducted a cohort study using data from the Taiwan Longitudinal Health Insurance Database. We performed Cox regressions to compare the hazard ratio (HR) of herpes zoster in the COPD cohort and in an age- and sex-matched comparison cohort. We divided the patients with COPD into three groups according to use of steroid medications and performed a further analysis to examine the risk of herpes zoster.

Results

The study included 8486 patients with COPD and 33 944 matched control patients. After adjustment for potential confounding factors, patients with COPD were more likely to develop herpes zoster (adjusted HR 1.68, 95% confidence interval [CI] 1.45–1.95). Compared with the comparison cohort, patients with COPD not taking steroid medications had an adjusted HR of herpes zoster of 1.67 (95% CI 1.43–1.96). The adjusted HR of herpes zoster was slightly greater for patients with COPD using inhaled corticosteroids only (adjusted HR 2.09, 95% CI 1.38–3.16) and was greatest for patients with COPD using oral steroids (adjusted HR 3.00, 95% CI 2.40–3.75).

Interpretation

Patients with COPD were at increased risk of herpes zoster relative to the general population. The relative risk of herpes zoster was greatest for patients with COPD using oral steroids. Herpes zoster is caused by a reactivation of latent varicella-zoster virus residing in sensory ganglia after an earlier episode of varicella.1 Herpes zoster is characterized by a painful vesicular dermatomal rash. It is commonly complicated by chronic pain (postherpetic neuralgia), resulting in reduced quality of life and functional disability to a degree comparable to that experienced by patients with congestive heart failure, diabetes mellitus and major depression.1,2 Patients with herpes zoster experience more substantial role limitations resulting from emotional and physical problems than do patients with congestive heart failure or diabetes.3 Pain scores for postherpetic neuralgia have been shown to be as high as those for chronic pain from osteoarthritis and rheumatoid arthritis.3 Although aging is the best-known risk factor for herpes zoster, people with diseases associated with impaired immunity, such as malignancy, HIV infection, diabetes and rheumatic diseases, are also at higher risk for herpes zoster.4,5 Chronic obstructive pulmonary disease (COPD) is characterized by progressive airflow limitation that is associated with an abnormal inflammatory response by the small airways and alveoli to inhaled particles and pollutants.6 Disruption of local defence systems (e.g., damage to the innate immune system, impaired mucociliary clearance) predisposes patients with COPD to respiratory tract infections. Each infection can cause exacerbation of COPD and further deterioration of lung function, which in turn increase predisposition to infection.7,8 There is increasing evidence that COPD is an autoimmune disease, with chronic systemic inflammation involving more than just the airways and lungs.6 Given that various immune-mediated diseases (e.g., rheumatoid arthritis, inflammatory bowel disease) have been reported to be associated with an increased risk of herpes zoster,4,9,10 it is reasonable to hypothesize that the immune dysregulation found in COPD may put patients at higher risk of developing herpes zoster. In addition, inhaled or systemic corticosteroids used for management of COPD can increase susceptibility to herpes zoster by suppressing normal immune function.11 However, data are limited regarding the risk of herpes zoster among patients with COPD. The goal of our study was to investigate whether patients with COPD have a higher incidence of herpes zoster than the general population. In addition, we aimed to examine the risk for herpes zoster with and without steroid therapy among patients with COPD relative to the general population.  相似文献
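The hazard ratios above come from Cox regressions on insurance-claims data. As a rough, hedged illustration of how such an estimate is produced, the sketch below fits a Cox proportional hazards model with the Python lifelines package on synthetic data; the cohort construction, covariates and follow-up are hypothetical and do not reproduce the Taiwan Longitudinal Health Insurance Database analysis.

```python
# Illustrative sketch only: hazard ratio for herpes zoster in a COPD cohort vs.
# a comparison cohort, on synthetic data. Requires: pip install lifelines pandas numpy
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(42)
n = 5000
copd = rng.integers(0, 2, n)                 # 1 = COPD cohort, 0 = matched comparison (hypothetical)
age = rng.normal(65, 10, n)

# Synthetic time-to-event data with a higher event rate in the COPD group.
hazard = 0.01 * np.exp(0.5 * copd + 0.02 * (age - 65))
time_to_zoster = rng.exponential(1 / hazard)
follow_up = np.minimum(time_to_zoster, 10)   # administrative censoring at 10 years
event = (time_to_zoster <= 10).astype(int)

df = pd.DataFrame({"copd": copd, "age": age, "duration": follow_up, "zoster": event})
cph = CoxPHFitter()
cph.fit(df, duration_col="duration", event_col="zoster")
print(cph.hazard_ratios_)   # hazard ratio for COPD, adjusted for age
```

A hazard ratio above 1 for the COPD indicator, with a confidence interval excluding 1, would correspond to the kind of excess risk reported above; the study additionally stratified COPD patients by steroid use.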
