Similar Articles
1.

Background

Recent studies have reported a trend toward earlier initiation of dialysis (i.e., at higher levels of glomerular filtration rate) and an association between early initiation and increased risk of death. We examined trends in initiation of hemodialysis within Canada and compared the risk of death between patients with early and late initiation of dialysis.

Methods

The analytic cohort consisted of 25 910 patients at least 18 years of age who initiated hemodialysis, as identified from the Canadian Organ Replacement Register (2001–2007). We defined the initiation of dialysis as early if the estimated glomerular filtration rate was greater than 10.5 mL/min per 1.73 m2. We fitted time-dependent Cox proportional-hazards models to compare the risk of death between patients with early and late initiation of dialysis.

Results

Between 2001 and 2007, the mean estimated glomerular filtration rate at initiation of dialysis increased from 9.3 (standard deviation [SD] 5.2) to 10.2 (SD 7.1) mL/min per 1.73 m2 (p < 0.001), and the proportion of early starts rose from 28% (95% confidence interval [CI] 27%–30%) to 36% (95% CI 34%–37%). Mean glomerular filtration rate was 15.5 (SD 7.7) mL/min per 1.73 m2 among those with early initiation and 7.1 (SD 2.0) mL/min per 1.73 m2 among those with late initiation. The unadjusted hazard ratio (HR) for mortality with early relative to late initiation was 1.48 (95% CI 1.43–1.54). The HR decreased to 1.18 (95% CI 1.13–1.23) after adjustment for demographic characteristics, serum albumin, primary cause of end-stage renal disease, vascular access type, comorbidities, late referral and transplant status. The mortality differential between early and late initiation per 1000 patient-years narrowed after the first year of follow-up but never reversed, and it began widening again after 24 months of follow-up. The differences were significant at 6, 12, 30 and 36 months.
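Hazard-ratio confidence intervals like those above are computed on the log scale. As a hedged sketch (not the study's actual code), the standard error of the log hazard ratio can be recovered from a reported 95% CI and the interval rebuilt around the point estimate, here using the unadjusted HR of 1.48 (95% CI 1.43–1.54):

```python
import math

def log_hr_se(hr_low, hr_high, z=1.96):
    """Recover the standard error of log(HR) from a reported 95% CI."""
    return (math.log(hr_high) - math.log(hr_low)) / (2 * z)

def hr_ci(hr, se, z=1.96):
    """Rebuild a 95% CI for a hazard ratio from its point estimate and SE."""
    log_hr = math.log(hr)
    return math.exp(log_hr - z * se), math.exp(log_hr + z * se)

# Unadjusted HR for early vs. late initiation, as reported in the abstract.
se = log_hr_se(1.43, 1.54)
low, high = hr_ci(1.48, se)
print(round(se, 4), round(low, 2), round(high, 2))
```

Rebuilding the interval from the recovered SE reproduces the reported bounds to two decimal places, a quick internal-consistency check on published log-scale intervals.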

Interpretation

In Canada, dialysis is being initiated at increasingly higher levels of glomerular filtration rate. A higher glomerular filtration rate at initiation of dialysis is associated with an increased risk of death that is not fully explained by differences in baseline characteristics.

Examination of dialysis registry data, both in the United States and Europe, has shown that this procedure is being initiated for patients with increasingly higher levels of estimated glomerular filtration rate.1–4 Although the indications for dialysis are often not recorded or not accessible for data-gathering, a large international survey of physicians in 2000 revealed that the two leading determinants of early initiation were uremic signs and symptoms (38%) and residual kidney function (32%), with 90% of respondents indicating that initiating dialysis earlier (by 6–12 months) would give some major advantage in terms of outcomes.5 The timing of initiation of dialysis is based on clinical judgment and the interpretation of signs, symptoms and laboratory test results, with possibly greater emphasis recently on estimated glomerular filtration rate following the introduction of national guidelines.6,7 These guidelines were influenced by studies conducted in the 1980s and 1990s, which suggested that late initiation was potentially harmful.8,9 In 2006, the US National Kidney Foundation suggested that initiation of dialysis be considered before stage 5 chronic kidney disease (estimated glomerular filtration rate < 15 mL/min per 1.73 m2) if symptoms were related to both comorbidities and level of residual kidney function.10 The guidelines were based on existing observational information and never advocated initiating dialysis at a specific value of estimated glomerular filtration rate.
However, studies conducted since 2001 have shown no survival benefit with initiation of hemodialysis at higher values of estimated glomerular filtration rate.4,11–14

Using data from the Canadian Organ Replacement Register, we examined trends in the timing of hemodialysis initiation between 2001 and 2007, characterized patients with early and late initiation of dialysis, and compared the risk of death between these groups over time while controlling for baseline imbalances. In this article, we discuss the confounding effects of selection, survivor, misclassification, lead-time and indication bias.

2.

Background

The pathogenesis of appendicitis is unclear. We evaluated whether exposure to air pollution was associated with an increased incidence of appendicitis.

Methods

We identified 5191 adults who had been admitted to hospital with appendicitis between Apr. 1, 1999, and Dec. 31, 2006. The air pollutants studied were ozone, nitrogen dioxide, sulfur dioxide, carbon monoxide, and suspended particulate matter of less than 10 μm and less than 2.5 μm in diameter. We estimated the odds of appendicitis associated with short-term increases in concentrations of selected pollutants, alone and in combination, after controlling for temperature and relative humidity as well as the effects of age, sex and season.

Results

An increase in the interquartile range of the 5-day average of ozone was associated with appendicitis (odds ratio [OR] 1.14, 95% confidence interval [CI] 1.03–1.25). In summer (July–August), the effects were most pronounced for ozone (OR 1.32, 95% CI 1.10–1.57), sulfur dioxide (OR 1.30, 95% CI 1.03–1.63), nitrogen dioxide (OR 1.76, 95% CI 1.20–2.58), carbon monoxide (OR 1.35, 95% CI 1.01–1.80) and particulate matter less than 10 μm in diameter (OR 1.20, 95% CI 1.05–1.38). We observed a significant effect of the air pollutants in the summer months among men but not among women (e.g., OR for increase in the 5-day average of nitrogen dioxide 2.05, 95% CI 1.21–3.47, among men and 1.48, 95% CI 0.85–2.59, among women). The double-pollutant model of exposure to ozone and nitrogen dioxide in the summer months was associated with attenuation of the effects of ozone (OR 1.22, 95% CI 1.01–1.48) and nitrogen dioxide (OR 1.48, 95% CI 0.97–2.24).
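Odds ratios "per interquartile-range increase," as reported above, are obtained by scaling a per-unit logistic regression coefficient by the pollutant's IQR. A minimal sketch, using a hypothetical coefficient and IQR (not the study's actual numbers, though chosen to land near the reported OR of 1.14):

```python
import math

def per_iqr_or(beta_per_unit, iqr):
    """Scale a per-unit logistic coefficient to a per-IQR odds ratio."""
    return math.exp(beta_per_unit * iqr)

# Hypothetical values for illustration: 0.0087 per ppb of ozone and an
# interquartile range of 15 ppb.
or_iqr = per_iqr_or(0.0087, 15.0)
print(round(or_iqr, 2))
```

Reporting effects per IQR rather than per unit makes odds ratios comparable across pollutants measured on very different concentration scales.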

Interpretation

Our findings suggest that some cases of appendicitis may be triggered by short-term exposure to air pollution. If these findings are confirmed, measures to improve air quality may help to decrease rates of appendicitis.

Appendicitis was introduced into the medical vernacular in 1886.1 Since then, the prevailing theory of its pathogenesis has implicated an obstruction of the appendiceal orifice by a fecalith or lymphoid hyperplasia.2 However, this notion does not completely account for variations in incidence observed by age,3,4 sex,3,4 ethnic background,3,4 family history,5 temporal–spatial clustering6 and seasonality,3,4 nor does it completely explain the trends in incidence of appendicitis in developed and developing nations.3,7,8 The incidence of appendicitis increased dramatically in industrialized nations in the 19th century and in the early part of the 20th century.1 Without explanation, it decreased in the middle and latter part of the 20th century.3 The decrease coincided with legislation to improve air quality. For example, after the United States Clean Air Act was passed in 1970,9 the incidence of appendicitis decreased by 14.6% from 1970 to 1984.3 Likewise, a 36% drop in incidence was reported in the United Kingdom between 1975 and 1994,10 after legislation was passed in 1956 and 1968 to improve air quality and in the 1970s to control industrial sources of air pollution.
Furthermore, appendicitis is less common in developing nations; however, as these countries become more industrialized, the incidence of appendicitis has been increasing.7

Air pollution is known to be a risk factor for multiple conditions, to exacerbate disease states and to increase all-cause mortality.11 It has a direct effect on pulmonary diseases such as asthma11 and on nonpulmonary diseases including myocardial infarction, stroke and cancer.11–13 Inflammation induced by exposure to air pollution contributes to some adverse health effects.14–17 Similar to the effects of air pollution, a proinflammatory response has been associated with appendicitis.18–20

We conducted a case–crossover study involving a population-based cohort of patients admitted to hospital with appendicitis to determine whether short-term increases in concentrations of selected air pollutants were associated with hospital admission because of appendicitis.

3.
Background: Rates of imaging for low-back pain are high and are associated with increased health care costs and radiation exposure, as well as potentially poorer patient outcomes. We conducted a systematic review to investigate the effectiveness of interventions aimed at reducing the use of imaging for low-back pain.

Methods: We searched MEDLINE, Embase, CINAHL and the Cochrane Central Register of Controlled Trials from the earliest records to June 23, 2014. We included randomized controlled trials, controlled clinical trials and interrupted time series studies that assessed interventions designed to reduce the use of imaging in any clinical setting, including primary, emergency and specialist care. Two independent reviewers extracted data and assessed risk of bias. We used raw data on imaging rates to calculate summary statistics. Study heterogeneity prevented meta-analysis.

Results: A total of 8500 records were identified through the literature search. Of the 54 potentially eligible studies reviewed in full, 7 were included in our review. Clinical decision support involving a modified referral form in a hospital setting reduced imaging by 36.8% (95% confidence interval [CI] 33.2% to 40.5%). Targeted reminders to primary care physicians of appropriate indications for imaging reduced referrals for imaging by 22.5% (95% CI 8.4% to 36.8%). Interventions that used practitioner audits and feedback, practitioner education or guideline dissemination did not significantly reduce imaging rates. Lack of power within some of the included studies resulted in a lack of statistical significance despite potentially clinically important effects.

Interpretation: Clinical decision support in a hospital setting and targeted reminders to primary care doctors were effective interventions in reducing the use of imaging for low-back pain.
These are potentially low-cost interventions that would substantially decrease medical expenditures associated with the management of low-back pain.

Current evidence-based clinical practice guidelines recommend against the routine use of imaging in patients presenting with low-back pain.1–3 Despite this, imaging rates remain high,4,5 which indicates poor concordance with these guidelines.6,7

Unnecessary imaging for low-back pain has been associated with poorer patient outcomes, increased radiation exposure and higher health care costs.8 No short- or long-term clinical benefits have been shown with routine imaging of the low back, and the diagnostic value of incidental imaging findings remains uncertain.9–12 A 2008 systematic review found that imaging accounted for 7% of direct costs associated with low-back pain, which in 1998 translated to more than US$6 billion in the United States and £114 million in the United Kingdom.13 Current costs are likely to be substantially higher, with an estimated 65% increase in spine-related expenditures between 1997 and 2005.14

Various interventions have been tried to reduce imaging rates among people with low-back pain. These include strategies targeted at the practitioner, such as guideline dissemination,15–17 education workshops,18,19 audit and feedback of imaging use,7,20,21 ongoing reminders7 and clinical decision support.22–24 It is unclear which, if any, of these strategies are effective.25 We conducted a systematic review to investigate the effectiveness of interventions designed to reduce imaging rates for the management of low-back pain.

4.

Background:

Chronic kidney disease is an important risk factor for death and cardiovascular-related morbidity, but estimates to date of its prevalence in Canada have generally been extrapolated from the prevalence of end-stage renal disease. We used direct measures of kidney function collected from a nationally representative survey population to estimate the prevalence of chronic kidney disease among Canadian adults.

Methods:

We examined data for 3689 adult participants of cycle 1 of the Canadian Health Measures Survey (2007–2009) for the presence of chronic kidney disease. We also calculated the age-standardized prevalence of cardiovascular risk factors by chronic kidney disease group. We cross-tabulated the estimated glomerular filtration rate (eGFR) with albuminuria status.

Results:

The prevalence of chronic kidney disease during the period 2007–2009 was 12.5%, representing about 3 million Canadian adults. The estimated prevalence of stage 3–5 disease was 3.1% (0.73 million adults) and that of albuminuria 10.3% (2.4 million adults). The prevalences of diabetes, hypertension and hypertriglyceridemia were all significantly higher among adults with chronic kidney disease than among those without it. The prevalence of albuminuria was high, even among those whose eGFR was 90 mL/min per 1.73 m2 or greater (10.1%) and those without diabetes or hypertension (9.3%). Awareness of kidney dysfunction among adults with stage 3–5 chronic kidney disease was low (12.0%).
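The paired prevalences and population counts above imply a common reference population. As a hedged consistency check (a sketch using only the figures quoted in the abstract), backing the adult population out of each pair shows they all point to roughly 23–24 million Canadian adults:

```python
def implied_population(count_millions, prevalence):
    """Back out the reference population (in millions) implied by a
    reported affected count and its prevalence."""
    return count_millions / prevalence

# Pairs from the abstract: 12.5% ~ 3.0 million (overall chronic kidney
# disease), 3.1% ~ 0.73 million (stage 3-5), 10.3% ~ 2.4 million (albuminuria).
for count, prev in [(3.0, 0.125), (0.73, 0.031), (2.4, 0.103)]:
    print(round(implied_population(count, prev), 1))
```

The small spread across the three pairs (24.0, 23.5 and 23.3 million) is attributable to rounding in the reported counts.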

Interpretation:

The prevalence of kidney dysfunction was substantial in the survey population, including among individuals without hypertension or diabetes, the conditions most likely to prompt screening for kidney dysfunction. These findings highlight the potential for missed opportunities for early intervention and secondary prevention of chronic kidney disease.

Chronic kidney disease is defined as the presence of kidney damage or reduced kidney function for more than 3 months and requires either a measured or estimated glomerular filtration rate (eGFR) of less than 60 mL/min per 1.73 m2, or the presence of abnormalities in urine sediment, renal imaging or biopsy results.1 Between 1.3 million and 2.9 million Canadians are estimated to have chronic kidney disease, based on an extrapolation of the prevalence of end-stage renal disease.2 In the United States, the 1999–2004 National Health and Nutrition Examination Survey reported a prevalence of 5.0% for stage 1 and 2 disease and 8.1% for stage 3 and 4 disease.3,4

Chronic kidney disease has been identified as a risk factor for death and cardiovascular-related morbidity and is a substantial burden on the health care system.1,5 Hemodialysis costs the Canadian health care system about $60 000 per patient per year of treatment.1 The increasing prevalence of chronic kidney disease can be attributed in part to the growing elderly population and to increasing rates of diabetes and hypertension.1,6,7

Albuminuria, which can result from abnormal vascular permeability, atherosclerosis or renal disease, has gained recognition as an independent risk factor for progressive renal dysfunction and adverse cardiovascular outcomes.8–10 In earlier stages of chronic kidney disease, albuminuria has been shown to be more predictive of renal and cardiovascular events than eGFR.4,9 This has prompted calls for a new risk stratification for cardiovascular outcomes based on both eGFR and albuminuria.11

A recent review advocated screening people for chronic kidney disease
if they have hypertension, diabetes, clinically evident cardiovascular disease or a family history of kidney failure, or are more than 60 years old.4 The Canadian Society of Nephrology published guidelines on the management of chronic kidney disease but did not offer guidance on screening.1 The Canadian Diabetes Association recommends annual screening with the use of an albumin:creatinine ratio,12 and the Canadian Hypertension Education Program guideline recommends urinalysis as part of the initial assessment of hypertension.13 Screening for chronic kidney disease on the basis of eGFR and albuminuria is not considered to be cost-effective in the general population, among older people or among people with hypertension.14

The objective of our study was to use direct measures (biomarkers) of kidney function to generate nationally representative, population-based prevalence estimates of chronic kidney disease among Canadian adults overall and in clinically relevant groups.

5.

Background:

Polymyalgia rheumatica is one of the most common inflammatory rheumatologic conditions in older adults. Other inflammatory rheumatologic disorders are associated with an excess risk of vascular disease. We investigated whether polymyalgia rheumatica is associated with an increased risk of vascular events.

Methods:

We used the General Practice Research Database to identify patients with a diagnosis of incident polymyalgia rheumatica between Jan. 1, 1987, and Dec. 31, 1999. Patients were matched by age, sex and practice with up to 5 patients without polymyalgia rheumatica. Patients were followed until their first vascular event (cardiovascular, cerebrovascular, peripheral vascular) or the end of available records (May 2011). All participants were free of vascular disease before the diagnosis of polymyalgia rheumatica (or matched date). We used Cox regression models to compare time to first vascular event in patients with and without polymyalgia rheumatica.

Results:

A total of 3249 patients with polymyalgia rheumatica and 12 735 patients without were included in the final sample. Over a median follow-up period of 7.8 (interquartile range 3.3–12.4) years, the rate of vascular events was higher among patients with polymyalgia rheumatica than among those without (36.1 v. 12.2 per 1000 person-years; adjusted hazard ratio 2.6, 95% confidence interval 2.4–2.9). The increased risk of a vascular event was similar for each vascular disease end point. The magnitude of risk was higher in early disease and in patients younger than 60 years at diagnosis.
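The rates above are incidence rates per 1000 person-years. A minimal sketch of the calculation, using hypothetical event counts and person-time chosen only to reproduce the reported rates of 36.1 and 12.2 (the abstract reports rates, not raw counts):

```python
def rate_per_1000py(events, person_years):
    """Crude incidence rate per 1000 person-years of follow-up."""
    return 1000.0 * events / person_years

# Hypothetical counts for illustration, not the study's data.
exposed = rate_per_1000py(361, 10_000)      # polymyalgia rheumatica group
unexposed = rate_per_1000py(1220, 100_000)  # matched comparison group
print(exposed, unexposed, round(exposed / unexposed, 2))
```

Note that the crude rate ratio implied by these rates (about 2.96) is larger than the reported adjusted hazard ratio of 2.6, which is consistent with part of the crude difference being explained by the adjusted covariates.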

Interpretation:

Patients with polymyalgia rheumatica have an increased risk of vascular events. This risk is greatest in the youngest age groups. As with other forms of inflammatory arthritis, patients with polymyalgia rheumatica should have their vascular risk factors identified and actively managed to reduce this excess risk.

Inflammatory rheumatologic disorders such as rheumatoid arthritis,1,2 systemic lupus erythematosus,2,3 gout,4 psoriatic arthritis2,5 and ankylosing spondylitis2,6 are associated with an increased risk of vascular disease, especially cardiovascular disease, leading to substantial morbidity and premature death.2–6 Recognition of this excess vascular risk has led to management guidelines advocating screening for and management of vascular risk factors.7–9

Polymyalgia rheumatica is one of the most common inflammatory rheumatologic conditions in older adults,10 with a lifetime risk of 2.4% for women and 1.7% for men.11 To date, evidence regarding the risk of vascular disease in patients with polymyalgia rheumatica is unclear. There are a number of biologically plausible mechanisms linking polymyalgia rheumatica and vascular disease.
These include the inflammatory burden of the disease,12,13 the association of the disease with giant cell arteritis (causing an inflammatory vasculopathy, which may lead to subclinical arteritis, stenosis or aneurysms),14 and the adverse effects of long-term corticosteroid treatment (e.g., diabetes, hypertension and dyslipidemia).15,16 Paradoxically, however, use of corticosteroids in patients with polymyalgia rheumatica may actually decrease vascular risk by controlling inflammation.17 A recent systematic review concluded that although some evidence exists to support an association between vascular disease and polymyalgia rheumatica,18 the existing literature presents conflicting results, with some studies reporting an excess risk of vascular disease19,20 and vascular death,21,22 and others reporting no association.23–26 Most current studies are limited by poor methodologic quality and small samples, and are based on secondary care cohorts, whose patients may have more severe disease; yet most patients with polymyalgia rheumatica receive treatment exclusively in primary care.27

The General Practice Research Database (GPRD), based in the United Kingdom, is a large electronic system for primary care records. It has been used as a data source for previous studies,28 including studies on the association of inflammatory conditions with vascular disease29 and on the epidemiology of polymyalgia rheumatica in the UK.30 The aim of the current study was to examine the association between polymyalgia rheumatica and vascular disease in a primary care population.

6.

Background:

The gut microbiota is essential to human health throughout life, yet the acquisition and development of this microbial community during infancy remain poorly understood. Meanwhile, there is increasing concern over rising rates of cesarean delivery and insufficient exclusive breastfeeding of infants in developed countries. In this article, we characterize the gut microbiota of healthy Canadian infants and describe the influence of cesarean delivery and formula feeding.

Methods:

We included a subset of 24 term infants from the Canadian Healthy Infant Longitudinal Development (CHILD) birth cohort. Mode of delivery was obtained from medical records, and mothers were asked to report on infant diet and medication use. Fecal samples were collected at 4 months of age, and we characterized the microbiota composition using high-throughput DNA sequencing.

Results:

We observed high variability in the profiles of fecal microbiota among the infants. The profiles were generally dominated by Actinobacteria (mainly the genus Bifidobacterium) and Firmicutes (with diverse representation from numerous genera). Compared with breastfed infants, formula-fed infants had increased richness of species, with overrepresentation of Clostridium difficile. Escherichia–Shigella and Bacteroides species were underrepresented in infants born by cesarean delivery. Infants born by elective cesarean delivery had particularly low bacterial richness and diversity.
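"Richness" and "diversity" as used above are standard ecological summaries of a sample's taxon counts: richness is the number of taxa present, and diversity is commonly measured with the Shannon index. A hedged sketch with hypothetical read counts (illustrative only, not the study's data):

```python
import math

def richness(counts):
    """Species richness: the number of taxa with nonzero counts."""
    return sum(1 for c in counts if c > 0)

def shannon(counts):
    """Shannon diversity index H = -sum(p_i * ln p_i) over observed taxa."""
    total = sum(counts)
    props = (c / total for c in counts if c > 0)
    return -sum(p * math.log(p) for p in props)

# Hypothetical read counts for two infant samples (for illustration only):
sample_a = [120, 80, 60, 40, 30, 20, 10]  # more taxa, more evenly spread
sample_b = [300, 20, 5]                   # fewer taxa, dominated by one
print(richness(sample_a), round(shannon(sample_a), 2))
print(richness(sample_b), round(shannon(sample_b), 2))
```

The dominated sample scores lower on both measures, mirroring the pattern the abstract describes for infants born by elective cesarean delivery.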

Interpretation:

These findings advance our understanding of the gut microbiota in healthy infants. They also provide new evidence for the effects of delivery mode and infant diet as determinants of this essential microbial community in early life.

The human body harbours trillions of microbes, known collectively as the “human microbiome.” By far the highest density of commensal bacteria is found in the digestive tract, where resident microbes outnumber host cells by at least 10 to 1. Gut bacteria play a fundamental role in human health by promoting intestinal homeostasis, stimulating development of the immune system, providing protection against pathogens, and contributing to the processing of nutrients and harvesting of energy.1,2 The disruption of the gut microbiota has been linked to an increasing number of diseases, including inflammatory bowel disease, necrotizing enterocolitis, diabetes, obesity, cancer, allergies and asthma.1 Despite this evidence and a growing appreciation for the integral role of the gut microbiota in lifelong health, relatively little is known about the acquisition and development of this complex microbial community during infancy.3

Two of the best-studied determinants of the gut microbiota during infancy are mode of delivery and exposure to breast milk.4,5 Cesarean delivery perturbs normal colonization of the infant gut by preventing exposure to maternal microbes, whereas breastfeeding promotes a “healthy” gut microbiota by providing selective metabolic substrates for beneficial bacteria.3,5 Despite recommendations from the World Health Organization,6 the rate of cesarean delivery has continued to rise in developed countries, and rates of breastfeeding decrease substantially within the first few months of life.7,8 In Canada, more than 1 in 4 newborns are born by cesarean delivery, and less than 15% of infants are exclusively breastfed for the recommended duration of 6 months.9,10 In some parts of the world, elective cesarean deliveries are performed by
maternal request, often because of apprehension about pain during childbirth, and sometimes for patient–physician convenience.11

The potential long-term consequences of decisions regarding mode of delivery and infant diet are not to be underestimated. Infants born by cesarean delivery are at increased risk of asthma, obesity and type 1 diabetes,12 whereas breastfeeding is variably protective against these and other disorders.13 These long-term health consequences may be partially attributable to disruption of the gut microbiota.12,14

Historically, the gut microbiota has been studied with the use of culture-based methodologies to examine individual organisms. However, up to 80% of intestinal microbes cannot be grown in culture.3,15 New technology using culture-independent DNA sequencing enables comprehensive detection of intestinal microbes and permits simultaneous characterization of entire microbial communities. Multinational consortia have been established to characterize the “normal” adult microbiome using these exciting new methods;16 however, these methods have been underused in infant studies. Because early colonization may have long-lasting effects on health, infant studies are vital.3,4 Among the few studies of infant gut microbiota using DNA sequencing, most were conducted in restricted populations, such as infants delivered vaginally,17 infants born by cesarean delivery who were formula-fed18 or preterm infants with necrotizing enterocolitis.19

Thus, the gut microbiota is essential to human health, yet the acquisition and development of this microbial community during infancy remain poorly understood.3 In the current study, we address this gap in knowledge using new sequencing technology and detailed exposure assessments20 of healthy Canadian infants selected from a national birth cohort to provide representative, comprehensive profiles of gut microbiota according to mode of delivery and infant diet.

7.
Background: Otitis media with effusion is a common problem that lacks an evidence-based nonsurgical treatment option. We assessed the clinical effectiveness of treatment with a nasal balloon device in a primary care setting.

Methods: We conducted an open, pragmatic randomized controlled trial set in 43 family practices in the United Kingdom. Children aged 4–11 years with a recent history of ear symptoms and otitis media with effusion in 1 or both ears, confirmed by tympanometry, were allocated to receive either autoinflation 3 times daily for 1–3 months plus usual care or usual care alone. Clearance of middle-ear fluid at 1 and 3 months was assessed by experts masked to allocation.

Results: Of 320 children enrolled, those receiving autoinflation were more likely than controls to have normal tympanograms at 1 month (47.3% [62/131] v. 35.6% [47/132]; adjusted relative risk [RR] 1.36, 95% confidence interval [CI] 0.99 to 1.88) and at 3 months (49.6% [62/125] v. 38.3% [46/120]; adjusted RR 1.37, 95% CI 1.03 to 1.83; number needed to treat = 9). Autoinflation produced greater improvements in ear-related quality of life (adjusted between-group difference in change from baseline in OMQ-14 [an ear-related measure of quality of life] score −0.42, 95% CI −0.63 to −0.22). Compliance was 89% at 1 month and 80% at 3 months. Adverse events were mild, infrequent and comparable between groups.

Interpretation: Autoinflation in children aged 4–11 years with otitis media with effusion is feasible in primary care and effective both in clearing effusions and in improving symptoms and ear-related child and parent quality of life. Trial registration: ISRCTN, No. 55208702.

Otitis media with effusion, also known as glue ear, is an accumulation of fluid in the middle ear, without symptoms or signs of an acute ear infection.
It is often associated with viral infection.1–3 The prevalence rises to 46% in children aged 4–5 years,4 when hearing difficulty, other ear-related symptoms and broader developmental concerns often bring the condition to medical attention.3,5,6 Middle-ear fluid is associated with conductive hearing losses of about 15–45 dB HL.7 Resolution is clinically unpredictable,8–10 with about a third of cases showing recurrence.11 In the United Kingdom, about 200 000 children with the condition are seen annually in primary care.12,13 Research suggests that some children seen in primary care are as badly affected as those seen in hospital.7,9,14,15 In the United States, there were 2.2 million diagnosed episodes in 2004, costing an estimated $4.0 billion.16 Rates of ventilation tube surgery vary between countries,17–19 with a declining trend in the UK.20

Initial clinical management consists of reasonable temporizing or delay before considering surgery.13 Unfortunately, all available medical treatments for otitis media with effusion, such as antibiotics, antihistamines, decongestants and intranasal steroids, are ineffective and have unwanted effects, and therefore cannot be recommended.21–23 Not only are antibiotics ineffective, but resistance to them poses a major threat to public health.24,25 Although surgery is effective for a carefully selected minority,13,26,27 a simple low-cost, nonsurgical treatment option could benefit a much larger group of symptomatic children, addressing legitimate clinical concerns without incurring excessive delays.

Autoinflation using a nasal balloon device is a low-cost intervention with the potential to be used more widely in primary care, but current evidence of its effectiveness is limited to several small hospital-based trials28 that found a higher rate of tympanometric resolution of ear fluid at 1 month.29–31 Evidence of the feasibility and effectiveness of autoinflation to inform wider clinical use is lacking.13,28 Thus we
report here the findings of a large pragmatic trial of the clinical effectiveness of nasal balloon autoinflation in a spectrum of children with clinically confirmed otitis media with effusion identified from primary care.
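The number needed to treat of 9 quoted in the abstract follows directly from the 3-month proportions (62/125 with autoinflation v. 46/120 with usual care). A short sketch of that arithmetic (a worked check of the reported figures, not the trial's analysis code):

```python
import math

def nnt(control_rate, treatment_rate):
    """Number needed to treat: the reciprocal of the absolute benefit
    difference, rounded up to a whole number of patients."""
    absolute_benefit = treatment_rate - control_rate
    return math.ceil(1.0 / absolute_benefit)

# Proportions with normal tympanograms at 3 months, from the abstract:
# autoinflation 62/125 (49.6%), usual care 46/120 (38.3%).
print(nnt(46 / 120, 62 / 125))
```

The absolute difference is about 11.3 percentage points, so roughly one additional child clears the effusion for every 9 treated, matching the reported value.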

8.
Schultz AS, Finegan B, Nykiforuk CI, Kvern MA. CMAJ. 2011;183(18):E1334–E1344.

Background:

Many hospitals have adopted smoke-free policies on their property. We examined the consequences of such polices at two Canadian tertiary acute-care hospitals.

Methods:

We conducted a qualitative study using ethnographic techniques over a six-month period. Participants (n = 186) shared their perspectives on and experiences with tobacco dependence and managing the use of tobacco, as well as their impressions of the smoke-free policy. We interviewed inpatients individually from eight wards (n = 82), key policy-makers (n = 9) and support staff (n = 14) and held 16 focus groups with health care providers and ward staff (n = 81). We also reviewed ward documents relating to tobacco dependence and looked at smoking-related activities on hospital property.

Results:

Noncompliance with the policy and exposure to secondhand smoke were ongoing concerns. People’s impressions of the use of tobacco varied, including divergent opinions as to whether such use was a bad habit or an addiction. Treatment for tobacco dependence and the management of symptoms of withdrawal were offered inconsistently. Participants voiced concerns over patient safety and leaving the ward to smoke.

Interpretation:

Policies mandating smoke-free hospital property have important consequences beyond noncompliance, including concerns over patient safety and disruptions to care. Without adequately available and accessible support for withdrawal from tobacco, patients will continue to face personal risk when they leave hospital property to smoke.

Canadian cities and provinces have passed smoking bans with the goal of reducing people’s exposure to secondhand smoke in workplaces, public spaces and on the property adjacent to public buildings.1,2 In response, Canadian health authorities and hospitals began implementing policies mandating smoke-free hospital property, with the goals of reducing the exposure of workers, patients and visitors to tobacco smoke while delivering a public health message about the dangers of smoking.2–5 An additional anticipated outcome was the reduced use of tobacco among patients and staff. The impetuses for adopting smoke-free policies include public support for such legislation and the potential for litigation over exposure to secondhand smoke.2,4

Tobacco use is a modifiable risk factor associated with a variety of cancers, cardiovascular diseases and respiratory conditions.6–11 Patients in hospital who use tobacco tend to have more surgical complications and exacerbations of acute and chronic health conditions than patients who do not use tobacco.6–11 Any policy aimed at reducing exposure to tobacco in hospitals is well supported by evidence, as is the integration of interventions targeting tobacco dependence.12 Unfortunately, most of the nearly five million Canadians who smoke will receive suboptimal treatment,13 as the routine provision of interventions for tobacco dependence in hospital settings is not a practice norm.14–16 In smoke-free hospitals, two studies suggest minimal support is offered for withdrawal,17,18 and one reports an increased use of nicotine-replacement therapy after the implementation of the smoke-free policy.19

Assessments of the
effectiveness of smoke-free policies for hospital property tend to focus on noncompliance and related issues of enforcement.17,20,21 Although evidence of noncompliance and litter on hospital property2,17,20 implies ongoing exposure to tobacco smoke, half of the participating hospital sites in one study reported less exposure to tobacco smoke within hospital buildings and on the property.18 In addition, there is evidence to suggest some decline in smoking among staff.18,19,21,22 We sought to determine the consequences of policies mandating smoke-free hospital property in two Canadian acute-care hospitals by eliciting the lived experiences of the people faced with enacting the policies: patients and health care providers. In addition, we elicited stories from hospital support staff and administrators regarding the policies.  相似文献

9.

Background:

Persistent postoperative pain continues to be an underrecognized complication. We examined the prevalence of and risk factors for this type of pain after cardiac surgery.

Methods:

We enrolled patients scheduled for coronary artery bypass grafting or valve replacement, or both, from Feb. 8, 2005, to Sept. 1, 2009. Validated measures were used to assess (a) preoperative anxiety and depression, tendency to catastrophize in the face of pain, health-related quality of life and presence of persistent pain; (b) pain intensity and interference in the first postoperative week; and (c) presence and intensity of persistent postoperative pain at 3, 6, 12 and 24 months after surgery. The primary outcome was the presence of persistent postoperative pain during 24 months of follow-up.

Results:

A total of 1247 patients completed the preoperative assessment. Follow-up retention rates at 3 and 24 months were 84% and 78%, respectively. The prevalence of persistent postoperative pain decreased significantly over time, from 40.1% at 3 months to 22.1% at 6 months, 16.5% at 12 months and 9.5% at 24 months; the pain was rated as moderate to severe in 3.6% at 24 months. Acute postoperative pain predicted both the presence and severity of persistent postoperative pain. The more intense the pain during the first week after surgery and the more it interfered with functioning, the more likely the patients were to report persistent postoperative pain. Pre-existing persistent pain and increased preoperative anxiety also predicted the presence of persistent postoperative pain.

Interpretation:

Persistent postoperative pain of nonanginal origin after cardiac surgery affected a substantial proportion of the study population. Future research is needed to determine whether interventions to modify certain risk factors, such as preoperative anxiety and the severity of pain before and immediately after surgery, may help to minimize or prevent persistent postoperative pain.

Postoperative pain that persists beyond the normal time for tissue healing (> 3 mo) is increasingly recognized as an important complication after various types of surgery and can have serious consequences on patients’ daily living.1–3 Cardiac surgeries, such as coronary artery bypass grafting (CABG) and valve replacement, rank among the most frequently performed interventions worldwide.4 They aim to improve survival and quality of life by reducing symptoms, including anginal pain. However, persistent postoperative pain of nonanginal origin has been reported in 7% to 60% of patients following these surgeries.5–23 Such variability is common in other types of major surgery and is due mainly to differences in the definition of persistent postoperative pain, study design, data collection methods and duration of follow-up.1–3,24

Few prospective cohort studies have examined the exact time course of persistent postoperative pain after cardiac surgery, and follow-up has always been limited to a year or less.9,14,25 Factors that put patients at risk of this type of problem are poorly understood.26 Studies have reported inconsistent results regarding the contribution of age, sex, body mass index, preoperative angina, surgical technique, grafting site, postoperative complications or level of opioid consumption after surgery.5–7,9,13,14,16–19,21–23,25,27 Only one study investigated the role of chronic nonanginal pain before surgery as a contributing factor;21 five others prospectively assessed the association between persistent postoperative pain and acute pain intensity in the first postoperative week but reported conflicting results.13,14,21,22,25 All of the above studies were carried out in a single hospital and included relatively small samples. None of the studies examined the contribution of psychological factors such as levels of anxiety and depression before cardiac surgery, although these factors have been shown to influence acute or persistent postoperative pain in other types of surgery.1,24,28,29 We conducted a prospective multicentre cohort study (the CARD-PAIN study) to determine the prevalence of persistent postoperative pain of nonanginal origin up to 24 months after cardiac surgery and to identify risk factors for the presence and severity of the condition.  相似文献

10.
11.

Background

Fractures have largely been assessed by their impact on quality of life or health care costs. We conducted this study to evaluate the relation between fractures and mortality.

Methods

A total of 7753 randomly selected people (2187 men and 5566 women) aged 50 years and older from across Canada participated in a 5-year observational cohort study. Incident fractures were identified on the basis of validated self-report and were classified by type (vertebral, pelvic, forearm or wrist, rib, hip and “other”). We subdivided fracture groups by the year in which the fracture occurred during follow-up; those occurring in the fourth and fifth years were grouped together. We examined the relation between the time of the incident fracture and death.

Results

Compared with participants who had no fracture during follow-up, those who had a vertebral fracture in the second year were at increased risk of death (adjusted hazard ratio [HR] 2.7, 95% confidence interval [CI] 1.1–6.6); also at risk were those who had a hip fracture during the first year (adjusted HR 3.2, 95% CI 1.4–7.4). Among women, the risk of death was increased for those with a vertebral fracture during the first year (adjusted HR 3.7, 95% CI 1.1–12.8) or the second year of follow-up (adjusted HR 3.2, 95% CI 1.2–8.1). The risk of death was also increased among women with hip fracture during the first year of follow-up (adjusted HR 3.0, 95% CI 1.0–8.7).

Interpretation

Vertebral and hip fractures are associated with an increased risk of death. Interventions that reduce the incidence of these fractures need to be implemented to improve survival.

Osteoporosis-related fractures are a major health concern, affecting a growing number of individuals worldwide. The burden of fracture has largely been assessed by the impact on health-related quality of life and health care costs.1,2 Fractures can also be associated with death. However, studies that have examined the relation between fractures and mortality have had limitations that may influence their results and generalizability, including small samples,3,4 the examination of only one type of fracture,4–10 the inclusion of only women,8,11 the enrolment of participants from specific areas (i.e., hospitals or certain geographic regions),3,4,7,8,10,12 the nonrandom selection of participants3–11 and the lack of statistical adjustment for confounding factors that may influence mortality.3,5–7,12 We evaluated the relation between incident fractures and mortality over a 5-year period in a cohort of men and women 50 years of age and older. In addition, we examined whether other characteristics of participants were risk factors for death.  相似文献
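The fracture abstract above reports adjusted hazard ratios only as point estimates with 95% confidence intervals. When the underlying standard error is needed (e.g., for pooling estimates), it can be recovered by assuming the reported interval is a 95% Wald interval, which is symmetric on the log scale. A minimal sketch under that assumption:

```python
import math

def log_hr_se(lo: float, hi: float, z: float = 1.959964) -> float:
    """Recover the standard error of log(HR) from a reported 95% CI,
    assuming a Wald interval (symmetric on the log scale)."""
    return (math.log(hi) - math.log(lo)) / (2 * z)

def wald_z(hr: float, lo: float, hi: float) -> float:
    """Wald statistic for testing HR = 1 against the recovered SE."""
    return math.log(hr) / log_hr_se(lo, hi)

# Vertebral fracture in year 2: adjusted HR 2.7 (95% CI 1.1-6.6)
se = log_hr_se(1.1, 6.6)   # ~0.46 on the log scale
z = wald_z(2.7, 1.1, 6.6)  # ~2.17, consistent with p < 0.05
```

The check that z exceeds 1.96 matches the abstract's claim that the interval excludes 1; if a reported CI fails this consistency check, the interval was probably not a plain Wald interval.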

12.

Background:

Little evidence exists on the effect of an energy-unrestricted healthy diet on metabolic syndrome. We evaluated the long-term effect of Mediterranean diets ad libitum on the incidence or reversion of metabolic syndrome.

Methods:

We performed a secondary analysis of the PREDIMED trial — a multicentre, randomized trial done between October 2003 and December 2010 that involved men and women (age 55–80 yr) at high risk for cardiovascular disease. Participants were randomly assigned to 1 of 3 dietary interventions: a Mediterranean diet supplemented with extra-virgin olive oil, a Mediterranean diet supplemented with nuts or advice on following a low-fat diet (the control group). The interventions did not include increased physical activity or weight loss as a goal. We analyzed available data from 5801 participants. We determined the effect of diet on incidence and reversion of metabolic syndrome using Cox regression analysis to calculate hazard ratios (HRs) and 95% confidence intervals (CIs).

Results:

Over 4.8 years of follow-up, metabolic syndrome developed in 960 (50.0%) of the 1919 participants who did not have the condition at baseline. The risk of developing metabolic syndrome did not differ between participants assigned to the control diet and those assigned to either of the Mediterranean diets (control v. olive oil HR 1.10, 95% CI 0.94–1.30, p = 0.231; control v. nuts HR 1.08, 95% CI 0.92–1.27, p = 0.3). Reversion occurred in 958 (28.2%) of the 3392 participants who had metabolic syndrome at baseline. Compared with the control group, participants on either Mediterranean diet were more likely to undergo reversion (control v. olive oil HR 1.35, 95% CI 1.15–1.58, p < 0.001; control v. nuts HR 1.28, 95% CI 1.08–1.51, p < 0.001). Participants in the group receiving olive oil supplementation showed significant decreases in both central obesity and high fasting glucose (p = 0.02); participants in the group supplemented with nuts showed a significant decrease in central obesity.

Interpretation:

A Mediterranean diet supplemented with either extra-virgin olive oil or nuts is not associated with the onset of metabolic syndrome, but such diets are more likely to cause reversion of the condition. An energy-unrestricted Mediterranean diet may be useful in reducing the risks of central obesity and hyperglycemia in people at high risk of cardiovascular disease. Trial registration: ISRCTN registry, no. ISRCTN35739639.

Metabolic syndrome is a cluster of 3 or more related cardiometabolic risk factors: central obesity (determined by waist circumference), hypertension, hypertriglyceridemia, low plasma high-density lipoprotein (HDL) cholesterol levels and hyperglycemia. Having the syndrome increases a person’s risk for type 2 diabetes and cardiovascular disease.1,2 In addition, the condition is associated with increased morbidity and all-cause mortality.1,3–5 The worldwide prevalence of metabolic syndrome in adults approaches 25%6–8 and increases with age,7 especially among women,8,9 making it an important public health issue.

Several studies have shown that lifestyle modifications,10 such as increased physical activity,11 adherence to a healthy diet12,13 or weight loss,14–16 are associated with reversion of the metabolic syndrome and its components. However, little information exists as to whether changes in the overall dietary pattern without weight loss might also be effective in preventing and managing the condition. The Mediterranean diet is recognized as one of the healthiest dietary patterns. It has shown benefits in patients with cardiovascular disease17,18 and in the prevention and treatment of related conditions, such as diabetes,19–21 hypertension22,23 and metabolic syndrome.24 Several cross-sectional25–29 and prospective30–32 epidemiologic studies have suggested an inverse association between adherence to the Mediterranean diet and the prevalence or incidence of metabolic syndrome. Evidence from clinical trials has shown that an energy-restricted Mediterranean diet33 or adopting a Mediterranean diet after weight loss34 has a beneficial effect on metabolic syndrome. However, these studies did not determine whether the effect could be attributed to the weight loss or to the diets themselves. Seminal data from the PREDIMED (PREvención con DIeta MEDiterránea) study suggested that adherence to a Mediterranean diet supplemented with nuts reversed metabolic syndrome more so than advice to follow a low-fat diet.35 However, that report was based on data from only 1224 participants followed for 1 year. We have analyzed the data from the final PREDIMED cohort after a median follow-up of 4.8 years to determine the long-term effects of a Mediterranean diet on metabolic syndrome.  相似文献

13.
Background:

Head injuries have been associated with subsequent suicide among military personnel, but outcomes after a concussion in the community are uncertain. We assessed the long-term risk of suicide after concussions occurring on weekends or weekdays in the community.

Methods:

We performed a longitudinal cohort analysis of adults with a diagnosis of concussion in Ontario, Canada, from Apr. 1, 1992, to Mar. 31, 2012 (a 20-yr period), excluding severe cases that resulted in hospital admission. The primary outcome was the long-term risk of suicide after a weekend or weekday concussion.

Results:

We identified 235 110 patients with a concussion. Their mean age was 41 years, 52% were men, and most (86%) lived in an urban location. A total of 667 subsequent suicides occurred over a median follow-up of 9.3 years, equivalent to 31 deaths per 100 000 patients annually or 3 times the population norm. Weekend concussions were associated with a one-third further increased risk of suicide compared with weekday concussions (relative risk 1.36, 95% confidence interval 1.14–1.64). The increased risk applied regardless of patients’ demographic characteristics, was independent of past psychiatric conditions, became accentuated with time and exceeded the risk among military personnel. Half of these patients had visited a physician in the last week of life.

Interpretation:

Adults with a diagnosis of concussion had an increased long-term risk of suicide, particularly after concussions on weekends. Greater attention to the long-term care of patients after a concussion in the community might save lives because deaths from suicide can be prevented.

Suicide is a leading cause of death in both military and community settings.1 During 2010, 3951 suicide deaths occurred in Canada2 and 38 364 in the United States.3 The frequency of attempted suicide is about 25 times higher, and the financial costs in the US equate to about US$40 billion annually.4 The losses from suicide in Canada are comparable to those in other countries when adjusted for population size.5 Suicide deaths can be devastating to surviving family and friends.6 Suicide in the community is almost always related to a psychiatric illness (e.g., depression, substance abuse), whereas suicide in the military is sometimes linked to a concussion from combat injury.7–10

Concussion is the most common brain injury in young adults and is defined as a transient disturbance of mental function caused by acute trauma.11 About 4 million concussion cases occur in the US each year, equivalent to a rate of about 1 per 1000 adults annually;12 direct Canadian data are not available. The majority lead to self-limited symptoms, and only a small proportion have a protracted course.13 However, the frequency of depression after concussion can be high,14,15 and traumatic brain injury in the military has been associated with subsequent suicide.8,16 Severe head trauma resulting in admission to hospital has also been associated with an increased risk of suicide, whereas mild concussion in ambulatory adults is an uncertain risk factor.17–20

The aim of this study was to determine whether concussion was associated with an increased long-term risk of suicide and, if so, whether the day of the concussion (weekend v. weekday) could be used to identify patients at further increased risk. The severity and mechanism of injury may differ by day of the week because recreational injuries are more common on weekends and occupational injuries are more common on weekdays.21–27 The risk of a second concussion, use of protective safeguards, propensity to seek care, subsequent oversight, sense of responsibility and other nuances may also differ for concussions acquired from weekend recreation rather than weekday work.28–31 Medical care on weekends may also be limited because of shortfalls in staffing.32  相似文献
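The abstract's figure of 31 deaths per 100 000 patients annually can be roughly reproduced from the cohort size and median follow-up alone. A back-of-the-envelope sketch, assuming total person-time is approximately cohort size times median follow-up (the study's exact person-time is not given here):

```python
def crude_rate_per_100k(events: int, n: int, followup_yr: float) -> float:
    """Crude annual event rate per 100 000, approximating total
    person-time as cohort size x median follow-up (a simplifying
    assumption; censoring and deaths shorten true person-time)."""
    return events / (n * followup_yr) * 100_000

# 667 suicides among 235 110 patients over a median 9.3 years
rate = crude_rate_per_100k(667, 235_110, 9.3)  # ~30.5, close to the reported 31
```

The small gap between ~30.5 and the reported 31 is expected: the published rate would have been computed from exact person-years rather than this median-based approximation.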

14.

Background:

Brief interventions delivered by family physicians to address excessive alcohol use among adult patients are effective. We conducted a study to determine whether such an intervention would be similarly effective in reducing binge drinking and excessive cannabis use among young people.

Methods:

We conducted a cluster randomized controlled trial involving 33 family physicians in Switzerland. Physicians in the intervention group received training in delivering a brief intervention to young people during the consultation in addition to usual care. Physicians in the control group delivered usual care only. Consecutive patients aged 15–24 years were recruited from each practice and, before the consultation, completed a confidential questionnaire about their general health and substance use. Patients were followed up at 3, 6 and 12 months after the consultation. The primary outcome measure was self-reported excessive substance use (≥ 1 episode of binge drinking, or ≥ 1 joint of cannabis per week, or both) in the past 30 days.

Results:

Of the 33 participating physicians, 17 were randomly allocated to the intervention group and 16 to the control group. Of the 594 participating patients, 279 (47.0%) identified themselves as binge drinkers or excessive cannabis users, or both, at baseline. Excessive substance use did not differ significantly between patients whose physicians were in the intervention group and those whose physicians were in the control group at any of the follow-up points (odds ratio [OR] and 95% confidence interval [CI] at 3 mo: 0.9 [0.6–1.4]; at 6 mo: 1.0 [0.6–1.6]; and at 12 mo: 1.1 [0.7–1.8]). The differences between groups were also nonsignificant after we restricted the analysis to patients who reported excessive substance use at baseline (OR 1.6, 95% CI 0.9–2.8, at 3 mo; OR 1.7, 95% CI 0.9–3.2, at 6 mo; and OR 1.9, 95% CI 0.9–4.0, at 12 mo).

Interpretation:

Training family physicians to use a brief intervention to address excessive substance use among young people was not effective in reducing binge drinking and excessive cannabis use in this patient population. Trial registration: Australian New Zealand Clinical Trials Registry, no. ACTRN12608000432314.

Most health-compromising behaviours begin in adolescence.1 Interventions to address these behaviours early are likely to bring long-lasting benefits.2 Harmful use of alcohol is a leading factor associated with premature death and disability worldwide, with a disproportionally high impact on young people (aged 10–24 yr).3,4 Similarly, early cannabis use can have adverse consequences that extend into adulthood.5–8

In adolescence and early adulthood, binge drinking on at least a monthly basis is associated with an increased risk of adverse outcomes later in life.9–12 Although any cannabis use is potentially harmful, weekly use represents a threshold in adolescence related to an increased risk of cannabis (and tobacco) dependence in adulthood.13 Binge drinking affects 30%–50% and excessive cannabis use about 10% of the adolescent and young adult population in Europe and the United States.10,14,15

Reducing substance-related harm involves multisectoral approaches, including promotion of healthy child and adolescent development, regulatory policies and early treatment interventions.16 Family physicians can add to the public health messages by personalizing their content within brief interventions.17,18 There is evidence that brief interventions can encourage young people to reduce substance use, yet most studies have been conducted in community settings (mainly educational), emergency services or specialized addiction clinics.1,16 Studies aimed at adult populations have shown favourable effects of brief alcohol interventions, and to some extent brief cannabis interventions, in primary care.19–22 These interventions have been recommended for adolescent populations.4,5,16 Yet young people have different modes of substance use and communication styles that may limit the extent to which evidence from adult studies can apply to them. Recently, a systematic review of brief interventions to reduce alcohol use in adolescents identified only one randomized controlled trial in primary care.23 The tested intervention, not provided by family physicians but involving audio self-assessment, was ineffective in reducing alcohol use in exposed adolescents.24 Sanci and colleagues showed that training family physicians to address health-risk behaviours among adolescents was effective in improving provider performance, but the extent to which this translates into improved outcomes remains unknown.25,26 Two nonrandomized studies suggested that screening for substance use and brief advice by family physicians could favour reduced alcohol and cannabis use among adolescents,27,28 but evidence from randomized trials is lacking.29 We conducted the PRISM-Ado (Primary care Intervention Addressing Substance Misuse in Adolescents) trial, a cluster randomized controlled trial of the effectiveness of training family physicians to deliver a brief intervention to address binge drinking and excessive cannabis use among young people.  相似文献

15.

Background:

Recent warnings from Health Canada regarding codeine for children have led to increased use of nonsteroidal anti-inflammatory drugs and morphine for common injuries such as fractures. Our objective was to determine whether morphine administered orally has superior efficacy to ibuprofen in fracture-related pain.

Methods:

We used a parallel group, randomized, blinded superiority design. Children who presented to the emergency department with an uncomplicated extremity fracture were randomly assigned to receive either morphine (0.5 mg/kg orally) or ibuprofen (10 mg/kg) for 24 hours after discharge. Our primary outcome was the change in pain score using the Faces Pain Scale — Revised (FPS-R). Participants were asked to record pain scores immediately before and 30 minutes after receiving each dose.

Results:

We analyzed data from 66 participants in the morphine group and 68 participants in the ibuprofen group. For both morphine and ibuprofen, we found a reduction in pain scores (mean pre–post difference ± standard deviation for dose 1: morphine 1.5 ± 1.2, ibuprofen 1.3 ± 1.0, between-group difference [δ] 0.2 [95% confidence interval (CI) −0.2 to 0.6]; dose 2: morphine 1.3 ± 1.3, ibuprofen 1.3 ± 0.9, δ 0 [95% CI −0.4 to 0.4]; dose 3: morphine 1.3 ± 1.4, ibuprofen 1.4 ± 1.1, δ −0.1 [95% CI −0.7 to 0.4]; and dose 4: morphine 1.5 ± 1.4, ibuprofen 1.1 ± 1.2, δ 0.4 [95% CI −0.2 to 1.1]). We found no significant difference in the change in pain scores between the morphine and ibuprofen groups at any of the 4 time points (p = 0.6). Participants in the morphine group had significantly more adverse effects than those in the ibuprofen group (56.1% v. 30.9%, p < 0.01).

Interpretation:

We found no significant difference in analgesic efficacy between orally administered morphine and ibuprofen. However, morphine was associated with a significantly greater number of adverse effects. Our results suggest that ibuprofen remains safe and effective for outpatient pain management in children with uncomplicated fractures. Trial registration: ClinicalTrials.gov, no. NCT01690780.

There is ample evidence that analgesia is underused,1 underprescribed,2 delayed in its administration2 and suboptimally dosed3 in clinical settings. Children are particularly susceptible to suboptimal pain management4 and are less likely to receive opioid analgesia.5 Untreated pain in childhood has been reported to lead to short-term problems such as slower healing6 and to long-term issues such as anxiety, needle phobia,7 hyperesthesia8 and fear of medical care.9 The American Academy of Pediatrics has reaffirmed its advocacy for the appropriate use of analgesia for children with acute pain.10

Fractures constitute between 10% and 25% of all injuries.11 The most severe pain after an injury occurs within the first 48 hours, with more than 80% of children showing compromise in at least 1 functional area.12 Low rates of analgesia have been reported after discharge from hospital.13 A recently improved understanding of the pharmacogenomics of codeine has raised significant concerns about its safety,14,15 and has led to a Food and Drug Administration boxed warning16 and a Health Canada advisory17 against its use. Although ibuprofen has been cited as the most common agent used by caregivers to treat musculoskeletal pain,12,13 there are concerns that its use as monotherapy may lead to inadequate pain management.6,18 Evidence suggests that orally administered morphine13 and other opioids are increasingly being prescribed.19 However, evidence for the oral administration of morphine in acute pain management is limited.20,21 Thus, additional studies are needed to address this gap in knowledge and provide a scientific basis for outpatient analgesic choices in children. Our objective was to assess whether orally administered morphine is superior to ibuprofen in relieving pain in children with nonoperative fractures.  相似文献
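The between-group differences and confidence intervals in the trial's Results can be checked from the summary statistics alone. A sketch using a normal approximation for the difference of two independent means (the trial's own analysis may have used a t-based or adjusted method); for dose 1 it reproduces the reported δ 0.2 (95% CI −0.2 to 0.6):

```python
import math

def mean_diff_ci(m1, s1, n1, m2, s2, n2, z=1.959964):
    """95% CI for the difference of two independent means computed
    from summary data (means, SDs, group sizes), using a normal
    approximation for the standard error of the difference."""
    delta = m1 - m2
    se = math.sqrt(s1**2 / n1 + s2**2 / n2)
    return delta, delta - z * se, delta + z * se

# Dose 1: morphine 1.5 +/- 1.2 (n = 66) v. ibuprofen 1.3 +/- 1.0 (n = 68)
delta, lo, hi = mean_diff_ci(1.5, 1.2, 66, 1.3, 1.0, 68)
```

Rounding (delta, lo, hi) to one decimal gives (0.2, −0.2, 0.6), matching the published interval; a CI that straddles zero is what makes the dose-1 comparison nonsignificant.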

16.

Background:

Use of the serum creatinine concentration, the most widely used marker of kidney function, has been associated with under-reporting of chronic kidney disease and late referral to nephrologists, especially among women and elderly people. To improve appropriateness of referrals, automatic reporting of the estimated glomerular filtration rate (eGFR) by laboratories was introduced in the province of Ontario, Canada, in March 2006. We hypothesized that such reporting, along with an ad hoc educational component for primary care physicians, would increase the number of appropriate referrals.

Methods:

We conducted a population-based before–after study with interrupted time-series analysis at a tertiary care centre. All referrals to nephrologists received at the centre during the year before and the year after automatic reporting of the eGFR was introduced were eligible for inclusion. We used regression analysis with autoregressive errors to evaluate whether such reporting by laboratories, along with ad hoc educational activities for primary care physicians, had an impact on the number and appropriateness of referrals to nephrologists.

Results:

A total of 2672 patients were included in the study. In the year after automatic reporting began, the number of referrals from primary care physicians increased by 80.6% (95% confidence interval [CI] 74.8% to 86.9%). The number of appropriate referrals increased by 43.2% (95% CI 38.0% to 48.2%). There was no significant change in the proportion of appropriate referrals between the two periods (−2.8%, 95% CI −26.4% to 43.4%). The proportion of elderly and female patients who were referred increased after reporting was introduced.

Interpretation:

The total number of referrals increased after automatic reporting of the eGFR began, especially among women and elderly people. The number of appropriate referrals also increased, but the proportion of appropriate referrals did not change significantly. Future research should be directed to understanding the reasons for inappropriate referral and to developing novel interventions for improving the referral process.

Until recently, the serum creatinine concentration was used universally as an index of the glomerular filtration rate (GFR) to identify and monitor chronic kidney disease.1 The serum creatinine concentration depends on several factors, the most important being muscle mass.1 Women as compared with men, and elderly people as compared with young adults, tend to have lower muscle mass for the same degree of kidney function and thus have lower serum creatinine concentrations.2,3 Consequently, the use of the serum creatinine concentration is associated with underrecognition of chronic kidney disease, delayed workup for chronic kidney disease and late referral to nephrologists, particularly among women and elderly people. Late referral has been associated with increased mortality among patients receiving dialysis.3–11

In 1999, the Modification of Diet in Renal Disease formula was introduced to calculate the estimated GFR (eGFR).12,13 This formula uses the patient’s serum creatinine concentration, age, sex and race (whether the patient is black or not). All of these variables are easily available to laboratories except race. Laboratories report the eGFR for non-black people, with advice to practitioners to multiply the result by 1.21 if their patient is black. Given that reporting of the eGFR markedly improves detection of chronic kidney disease,14,15 several national organizations recommended that laboratories automatically calculate and report the eGFR when the serum creatinine concentration is requested.16–19 These organizations also provided guidelines on appropriate referral to nephrology based on the value.

Although several studies have reported increases in referrals to nephrologists after automatic reporting of the eGFR was introduced,20–26 there is limited evidence on the impact that such reporting has had on the appropriateness of referrals. An increase in the number of inappropriate referrals would affect health care delivery, diverting scarce resources to the evaluation of relatively mild kidney disease. It would also likely increase wait times for all nephrology referrals and have a financial impact on the system because specialist care is more costly than primary care. We conducted a study to evaluate whether the introduction of automatic reporting of the eGFR by laboratories, along with ad hoc educational activities for primary care physicians, had an impact on the number and appropriateness of referrals to nephrologists.  相似文献
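The abstract describes the MDRD formula's inputs (serum creatinine, age, sex, race) without stating the equation. A sketch of the 4-variable MDRD equation as commonly published (coefficient 186 in the original study; laboratories using IDMS-standardized creatinine assays substitute 175), including the "multiply by 1.21" race adjustment the text mentions; this is illustrative, not lab-grade code:

```python
def egfr_mdrd(scr_mg_dl: float, age_yr: float, female: bool,
              black: bool = False) -> float:
    """4-variable MDRD estimate of GFR in mL/min per 1.73 m^2.
    Uses the original coefficient 186; swap in 175 for IDMS-aligned
    assays. Creatinine is in mg/dL."""
    egfr = 186.0 * scr_mg_dl ** -1.154 * age_yr ** -0.203
    if female:
        egfr *= 0.742
    if black:  # the "multiply the result by 1.21" advice noted above
        egfr *= 1.210
    return egfr

# A 60-year-old woman with serum creatinine 1.2 mg/dL -> eGFR ~49
```

Because the estimate scales inversely with creatinine and is reduced by the female and age terms, the same creatinine value maps to a lower eGFR in women and older patients, which is exactly why creatinine alone under-recognizes chronic kidney disease in those groups.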

17.

Background

Vaccination against herpes zoster is being considered in many countries. We conducted a multicentre prospective study to describe the impact of herpes zoster and postherpetic neuralgia on health-related quality of life.

Methods

From October 2005 to July 2006, 261 outpatients aged 50 years or older with herpes zoster were recruited from the clinical practices of 83 physicians within 14 days after rash onset. The Zoster Brief Pain Inventory was used to measure severity of pain and interference with activities of daily living because of pain. The EuroQol EQ-5D assessment tool was used to measure quality of life. These outcomes were assessed at recruitment and on days 7, 14, 21, 30, 60, 90, 120, 150 and 180 following recruitment.

Results

Acute herpes zoster interfered in all health domains, especially sleep (64% of participants), enjoyment of life (58%) and general activities (53%). The median duration of pain was 32.5 days. The median duration of interference with activities of daily living because of pain varied between 27 and 30 days. Overall, 24% of the participants had postherpetic neuralgia (pain for more than 90 days after rash onset). Anxiety and depression, enjoyment of life, mood and sleep were most frequently affected during the postherpetic neuralgia period. The mean EQ-5D score was 0.59 at enrolment and remained at 0.67 at all follow-up points among participants who reported clinically significant pain.

Interpretation

These data support the need for preventive strategies and additional early intervention to reduce the burden of herpes zoster and postherpetic neuralgia.

Herpes zoster, which is characterized by dermatomal pain and vesicular rash,1,2 results from reactivation of the varicella-zoster virus.3,4 The average lifetime risk of herpes zoster in developed countries is estimated to be about 30%5–7 and increases with increasing life expectancy. The most common complication of herpes zoster, and one of the most challenging to treat, is postherpetic neuralgia, a painful condition often defined as pain persisting for more than 90 days after rash onset.8 According to this definition, postherpetic neuralgia is estimated to occur in 8%–27% of people with herpes zoster overall.9–14 The risk of postherpetic neuralgia increases markedly with age.15

The Shingles Prevention Study, a randomized double-blind placebo-controlled trial, showed that a live-attenuated varicella-zoster virus vaccine was safe and effective in preventing herpes zoster and postherpetic neuralgia among people 60 years of age and older.13 Given these promising results, policy-makers and clinicians are being asked to make recommendations regarding the use and funding of the herpes zoster vaccine. To do this, evidence on the burden of herpes zoster from the patient’s perspective is required. The only data available on the impact of herpes zoster on health-related quality of life come from two short-term studies.16,17 Clinical reports and cross-sectional surveys18–20 have also suggested that postherpetic neuralgia can profoundly impair quality of life. However, no study has followed a cohort of patients with newly diagnosed herpes zoster for a sufficient period to assess postherpetic neuralgia and describe the associated impact on quality of life. We undertook a multicentre prospective study to describe the impact of herpes zoster and postherpetic neuralgia on health-related quality of life.  相似文献

18.
We report evidence that adenylate kinase (AK) from Escherichia coli can be activated by the direct binding of a magnesium ion to the enzyme, in addition to ATP-complexed Mg2+. By systematically varying the concentrations of AMP, ATP, and magnesium in kinetic experiments, we found that the apparent substrate inhibition of AK, formerly attributed to AMP, was suppressed at low magnesium concentrations and enhanced at high magnesium concentrations. This previously unreported magnesium dependence can be accounted for by a modified random bi-bi model in which Mg2+ can bind to AK directly prior to AMP binding. A new kinetic model is proposed to replace the conventional random bi-bi mechanism with substrate inhibition and is able to describe the kinetic data over a physiologically relevant range of magnesium concentrations. According to this model, the magnesium-activated AK exhibits a 23 ± 3-fold increase in its forward reaction rate compared with the unactivated form. The findings imply that Mg2+ could be an important effector in the energy signaling network in cells.

Adenylate kinase (AK)2 is a ∼24-kDa enzyme involved in cellular metabolism that catalyzes the reversible phosphoryl transfer reaction (1) shown in Reaction 1:

Mg2+·ATP + AMP ⇌ Mg2+·ADP + ADP (Reaction 1)

It is recognized to play an important role in cellular energetic signaling networks (2, 3). A deficiency in human AK function may lead to such illnesses as hemolytic anemia (4–8) and coronary artery disease (9); the latter is thought to be caused by a disruption of the AMP signaling network of AK (10). The ubiquity of AK makes it an ideal candidate for investigating evolutionary divergence and natural adaptation at a molecular level (11, 12). Indeed, extensive structure-function studies have been carried out for AK (reviewed in Ref. 13). Both structural and biophysical studies have suggested that large-amplitude conformational changes in AK are important for catalysis (14–19).
More recently, the functional roles of conformational dynamics have been investigated using NMR (20–22), computer simulations (23–27), and single-molecule spectroscopy (28). Given the critical role of AK in regulating cellular energy networks and its use as a model system for understanding the functional roles of conformational changes in enzymes, it is imperative that the enzymatic mechanism of AK be thoroughly characterized and understood.

The enzymatic reaction of adenylate kinase has been shown by isotope exchange experiments to follow a random bi-bi mechanism (29). Isoforms of adenylate kinase characterized from a wide range of species show a high degree of sequence, structure, and functional conservation. Although all AKs appear to follow the same random bi-bi mechanistic framework (15, 29–33), a detailed kinetic analysis reveals interesting variations among different isoforms. For example, one of the most puzzling discrepancies is the change in turnover rate with increasing AMP concentration between rabbit muscle AK and Escherichia coli AK. Although the reactivity of rabbit muscle AK is slightly inhibited at higher AMP concentrations (29, 32), E. coli AK exhibits its maximum turnover rate around 0.2 mM AMP followed by a steep drop, which plateaus at still higher AMP concentrations (33–35). This observation has traditionally been attributed to greater substrate inhibition by AMP in E. coli AK compared with the rabbit isoform; yet the question of whether the reaction involves competitive or non-competitive inhibition by AMP at the ATP binding site remains unresolved (15, 33, 35–37).

Here, we report a comprehensive kinetic study of the forward reaction of AK, exploring concentrations of nucleotides and Mg2+ comparable to those inside E. coli cells: [Mg2+] ∼ 1–2 mM (38) and [ATP] up to 3 mM (39).
We discovered a previously unreported phenomenon: an increase in the forward reaction rate of AK with increasing Mg2+ concentration, where the stoichiometry of Mg2+ to the enzyme is greater than one. This observation leads us to propose an Mg2+-activation mechanism augmenting the commonly accepted random bi-bi model for E. coli AK. Our model fully explains AK’s observed kinetic behavior with AMP, ATP, and Mg2+ as substrates, outperforming the previous model, which required AMP inhibition. The new Mg2+-activation model also resolves the discrepancies in AMP inhibition behavior among currently available E. coli AK kinetic data. Given the central role of AK in energy regulation and our new experimental evidence, it is possible that Mg2+ and its regulation participate in the respiratory network through AK (40–42), an exciting direction for future research.
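The qualitative shape of such an Mg2+-activation scheme can be sketched as a rapid-equilibrium random bi-bi rate law (independent-site special case) multiplied by a saturable activation term for a second Mg2+ binding directly to the enzyme. This is an illustration only, not the authors' fitted model: all Km/Kd values below are hypothetical placeholders, and only the ~23-fold maximal activation is taken from the reported result.

```python
def ak_forward_rate(atp, amp, mg, vmax=1.0,
                    k_atp=0.05, k_amp=0.02, k_mg=0.5, fold=23.0):
    """Forward AK rate (arbitrary units): random bi-bi substrate
    saturation times a hyperbolic Mg2+-activation factor.

    atp, amp, mg -- concentrations in mM (placeholder constants)
    fold         -- maximal forward-rate activation (~23x per the study)
    """
    # Activation: 1x with no free Mg2+, approaching `fold` at saturation
    activation = 1.0 + (fold - 1.0) * mg / (k_mg + mg)
    # Independent-site rapid-equilibrium occupancy in both substrates
    occupancy = (atp * amp) / ((k_atp + atp) * (k_amp + amp))
    return vmax * activation * occupancy

# At saturating substrates, the rate rises ~23-fold from Mg2+-free
# to Mg2+-saturated conditions
v_low = ak_forward_rate(atp=3.0, amp=1.0, mg=0.0)
v_high = ak_forward_rate(atp=3.0, amp=1.0, mg=1000.0)
print(v_high / v_low)  # approaches 23
```

Note the sketch deliberately omits the AMP substrate-inhibition term that the new model replaces; reproducing the full magnesium dependence would require the paper's complete scheme.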

19.

Background

Patients exposed to low-dose ionizing radiation from cardiac imaging and therapeutic procedures after acute myocardial infarction may be at increased risk of cancer.

Methods

Using an administrative database, we selected a cohort of patients who had an acute myocardial infarction between April 1996 and March 2006 and no history of cancer. We documented all cardiac imaging and therapeutic procedures involving low-dose ionizing radiation. The primary outcome was risk of cancer. Statistical analyses were performed using a time-dependent Cox model adjusted for age, sex and exposure to low-dose ionizing radiation from noncardiac imaging to account for work-up of cancer.

Results

Of the 82 861 patients included in the cohort, 77% underwent at least one cardiac imaging or therapeutic procedure involving low-dose ionizing radiation in the first year after acute myocardial infarction. The cumulative exposure to radiation from cardiac procedures was 5.3 millisieverts (mSv) per patient-year, of which 84% occurred during the first year after acute myocardial infarction. A total of 12 020 incident cancers were diagnosed during the follow-up period. There was a dose-dependent relation between exposure to radiation from cardiac procedures and subsequent risk of cancer. For every 10 mSv of low-dose ionizing radiation, there was a 3% increase in the age- and sex-adjusted risk of cancer over a mean follow-up period of five years (hazard ratio 1.003 per millisievert, 95% confidence interval 1.002–1.004).
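A per-unit hazard ratio compounds multiplicatively, which is how the reported 3% increase per 10 mSv follows from a hazard ratio of 1.003 per mSv. A quick check:

```python
hr_per_msv = 1.003                      # reported hazard ratio per millisievert
hr_per_10msv = hr_per_msv ** 10         # compound over 10 mSv of exposure
excess_risk = (hr_per_10msv - 1) * 100  # percentage increase in hazard
print(f"HR per 10 mSv: {hr_per_10msv:.3f} (~{excess_risk:.0f}% increase)")
# prints "HR per 10 mSv: 1.030 (~3% increase)"
```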

Interpretation

Exposure to low-dose ionizing radiation from cardiac imaging and therapeutic procedures after acute myocardial infarction is associated with an increased risk of cancer.

Studies involving atomic bomb survivors have documented an increased incidence of malignant neoplasms related to radiation exposure.1–4 Survivors who were farther from the epicentre of the blast had a lower incidence of cancer, whereas those who were closer had a higher incidence.5 Similar risk estimates have been reported among workers in nuclear plants.6 However, little is known about the relation between exposure to low-dose ionizing radiation from medical procedures and the risk of cancer.

In the six decades since the atomic bomb explosions, most individuals worldwide have had minimal exposure to ionizing radiation. However, the recent increase in the use of medical imaging and therapeutic procedures involving low-dose ionizing radiation has led to growing concern that individual patients may be at increased risk of cancer.7–12 Whereas strict regulatory control is placed on occupational exposure at work sites, no such control exists for patients who are exposed to such radiation.13–16

It is not only the frequency of these procedures that is increasing: newer types of imaging procedures also use higher doses of low-dose ionizing radiation than more traditional procedures.8,11 Among patients being evaluated for coronary artery disease, for example, coronary computed tomography is increasingly being used. This test may be used in addition to other tests such as nuclear scans, coronary angiography and percutaneous coronary intervention, each of which exposes the patient to low-dose ionizing radiation.12,17–21 Imaging procedures provide information that can be used to predict the prognosis of patients with coronary artery disease. Since such predictions do not necessarily translate into better clinical outcomes,8,12 the prognostic value obtained from imaging procedures using low-dose ionizing radiation needs to be balanced against the potential for risk.

Authors of several studies have estimated that the risk of cancer is not negligible among patients exposed to low-dose ionizing radiation.22–27 To our knowledge, however, none of these studies directly linked cumulative exposure and cancer risk. We examined a cohort of patients who had an acute myocardial infarction and measured the association between low-dose ionizing radiation from cardiac imaging and therapeutic procedures and the risk of cancer.

20.
Chun-Yuh Yang, CMAJ 2010;182(6):569–572

Background

There are limited empirical data to support the theory of a protective effect of parenthood against suicide, as proposed by Durkheim in 1897. I conducted this study to examine whether there is an association between parity and risk of death from suicide among women.

Methods

The study cohort consisted of 1 292 462 women in Taiwan who had a first live birth between Jan. 1, 1978, and Dec. 31, 1987. The women were followed from the date of their first birth to Dec. 31, 2007. Their vital status was ascertained by linking their records with a computerized mortality database. Cox proportional hazards regression models were used to estimate hazard ratios of death from suicide associated with parity.

Results

There were 2252 deaths from suicide during 32 464 187 person-years of follow-up. Suicide-related mortality was 6.94 per 100 000 person-years. After adjustment for age at first birth, marital status, years of schooling and place of delivery, the adjusted hazard ratio was 0.61 (95% confidence interval [CI] 0.54–0.68) among women with two live births and 0.40 (95% CI 0.35–0.45) among those with three or more live births, compared with women who had one live birth. I observed a significantly decreasing trend in adjusted hazard ratios of suicide with increasing parity.
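Reported hazard ratios and their 95% confidence intervals can be converted back to log-scale Cox coefficients and standard errors, a standard back-calculation (useful, e.g., for meta-analysis) that is not part of the study itself. A sketch using the two-child estimate above:

```python
import math

def hr_to_log_scale(hr, ci_low, ci_high, z=1.96):
    """Recover the Cox log-hazard coefficient and its standard error
    from a reported hazard ratio and 95% confidence interval."""
    beta = math.log(hr)
    se = (math.log(ci_high) - math.log(ci_low)) / (2 * z)
    return beta, se

# HR 0.61 (95% CI 0.54-0.68) for two live births vs. one
beta, se = hr_to_log_scale(0.61, 0.54, 0.68)

# Round-trip: the CI reconstructed from beta +/- 1.96*se matches the report
low = math.exp(beta - 1.96 * se)
high = math.exp(beta + 1.96 * se)
print(f"beta={beta:.3f}, SE={se:.3f}, CI=({low:.2f}, {high:.2f})")
```

The round-trip works because Cox confidence intervals are symmetric on the log-hazard scale, not on the hazard-ratio scale.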

Interpretation

This study provides evidence to support Durkheim’s hypothesis that parenthood confers a protective effect against suicide.

Childbearing is considered to have long-term effects on women’s health.1 However, little is known about the relation between parity and mortality among women except for cancers of the reproductive organs.2 In his book on suicide published in 1897, Durkheim concluded that the rate of death from suicide was lower among married women than among unmarried women because of the effect of parenthood and not marriage per se.3 Three studies since then have explored Durkheim’s hypothesis. In the first, published almost 100 years later, Hoyer and Lund conducted a prospective study in Norway involving 989 949 married women aged 25 years or older who were followed up for 15 years.4 They reported a negative association between suicide-related mortality and number of children. In a nested case–control study in Denmark involving 6500 women who committed suicide between Jan. 1, 1981, and Dec. 31, 1997, and 130 000 matched control subjects, Qin and Mortensen found a significantly decreased risk of suicide with increasing number of children.5 In the third study, 12 055 pregnant women in Finland were followed up from delivery in 1966 until 2001; the authors found a decreasing trend in suicide-related mortality with increasing parity.1

One reason for the limited empirical evidence exploring Durkheim’s hypothesis may have to do with sample size and study design.4 Only studies involving representative suicides from the general population can achieve sufficient power to detect the effect of parity on rare events such as suicide.1,4,5 Even in the prospective study involving 989 949 women followed for 15 years, only 11 deaths from suicide occurred among women with six or more children.4

In Taiwan, suicide is the eighth leading cause of death among men and the ninth among women. In 2007, the age-adjusted rate of death from suicide was 19.7 per 100 000 among men and 9.7 per 100 000 among women.6 Suicide rates in Western countries have generally been lower than those in Asian countries.7 Taiwan has seen a consistent increase in its suicide rate since 1999,6 whereas most Western countries had stable or slightly decreasing rates during the 1990s.8,9 The male:female ratio of suicide is frequently greater than 3:1 in Western countries,7 whereas it is 2:1 in Taiwan.10 High suicide rates among Chinese women have been well documented.11 One explanation is that Chinese women do not benefit from marriage as much as their male counterparts.12 The sex difference in suicide rates is largely driven by a high rate of suicide among women in Chinese societies.11 In many Western countries, the trend over the past several years has been in the opposite direction: rates among women have been stable or decreasing, whereas rates among men have been increasing.12 Furthermore, an epidemiologic study of suicides in Chinese communities found that the prevalence of mental illness among people who died by suicide was much lower there than in Western societies.13

Because the previous studies relating parity and suicide-related mortality were carried out in economically developed countries, and because different cultural settings might influence suicide patterns,3 I undertook the present study in Taiwan, using a cohort of women who had a first, singleton live birth between Jan. 1, 1978, and Dec. 31, 1987, to explore Durkheim’s hypothesis further.
