Similar articles
1.

Background

Inuit have not experienced an epidemic in type 2 diabetes mellitus, and it has been speculated that they may be protected from obesity’s metabolic consequences. We conducted a population-based screening for diabetes among Inuit in the Canadian Arctic and evaluated the association of visceral adiposity with diabetes.

Methods

A total of 36 communities participated in the International Polar Year Inuit Health Survey. Of the 2796 Inuit households approached, 1901 (68%) participated, yielding 2595 participants. Households were randomly selected, and adult residents were invited to participate. Assessments included anthropometry and fasting plasma lipid and glucose measurements; because of survey logistics, only 32% of participants underwent a 75-g oral glucose tolerance test. We calculated weighted prevalence estimates of metabolic risk factors for all participants.

Results

Participants’ mean age was 43.3 years; 35% were obese, 43.8% had an at-risk waist, and 25% had an elevated triglyceride level. Diabetes was identified in 12.2% of participants aged 50 years and older and in 1.9% of those younger than 50 years. A hypertriglyceridemic-waist phenotype was a strong predictor of diabetes (odds ratio [OR] 8.6, 95% confidence interval [CI] 2.1–34.6) in analyses adjusted for age, sex, region, family history of diabetes, education and use of lipid-lowering medications.

Interpretation

Metabolic risk factors were prevalent among Inuit. Our results suggest that Inuit are not protected from the metabolic consequences of obesity and that their diabetes prevalence is now comparable to that of the general Canadian population. Assessment of waist circumference and fasting triglyceride levels could represent an efficient means of identifying Inuit at high risk for diabetes.

Indigenous people across the Arctic continue to undergo cultural transitions that affect all dimensions of life, with implications for emerging obesity and changes in patterns of disease burden.1–3 A high prevalence of obesity among Canadian Inuit has been noted,3,4 and yet studies have suggested that the metabolic consequences of obesity may not be as severe among Inuit as they are in predominantly Caucasian or First Nations populations.4–6 Conversely, the prevalence of type 2 diabetes mellitus, which was noted to be rare among Inuit in early studies,7,8 now matches or exceeds that of predominantly Caucasian comparison populations in Alaska and Greenland.9–11 However, in Canada, available reports suggest that diabetes prevalence among Inuit remains below that of the general Canadian population.3,12

Given the rapid changes in the Arctic and a lack of comprehensive and uniform screening assessments, we used the International Polar Year Inuit Health Survey for Adults 2007–2008 to assess the current prevalence of glycemia and the toll of age and adiposity on glycemia in this population. However, adiposity is heterogeneous, and simple measures of body mass index (BMI, in kg/m2) and waist circumference do not capture visceral adiposity (intra-abdominal adipose tissue), which is considered more deleterious than subcutaneous fat.13 Therefore, we evaluated the “hypertriglyceridemic-waist” phenotype (i.e., the presence of both an at-risk waist circumference and an elevated triglyceride level) as a proxy indicator of visceral fat.13–15
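The adjusted odds ratio reported above comes from multivariable logistic regression, but the arithmetic behind an unadjusted OR and its Wald 95% CI can be illustrated from a 2×2 table. A minimal Python sketch; the counts are invented for illustration and are not survey data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts: diabetes by hypertriglyceridemic-waist status
or_, lo, hi = odds_ratio_ci(10, 20, 5, 40)
print(f"OR {or_:.1f}, 95% CI {lo:.1f}-{hi:.1f}")  # OR 4.0, 95% CI 1.2-13.3
```

Note how sparse cells inflate the standard error of log(OR), which is why the study's CI (2.1–34.6) is so wide despite a large point estimate.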

2.

Background

The pathogenesis of appendicitis is unclear. We evaluated whether exposure to air pollution was associated with an increased incidence of appendicitis.

Methods

We identified 5191 adults who had been admitted to hospital with appendicitis between Apr. 1, 1999, and Dec. 31, 2006. The air pollutants studied were ozone, nitrogen dioxide, sulfur dioxide, carbon monoxide, and suspended particulate matter of less than 10 μm and less than 2.5 μm in diameter. We estimated the odds of appendicitis relative to short-term increases in concentrations of selected pollutants, alone and in combination, after controlling for temperature and relative humidity as well as the effects of age, sex and season.

Results

An increase in the interquartile range of the 5-day average of ozone was associated with appendicitis (odds ratio [OR] 1.14, 95% confidence interval [CI] 1.03–1.25). In summer (July–August), the effects were most pronounced for ozone (OR 1.32, 95% CI 1.10–1.57), sulfur dioxide (OR 1.30, 95% CI 1.03–1.63), nitrogen dioxide (OR 1.76, 95% CI 1.20–2.58), carbon monoxide (OR 1.35, 95% CI 1.01–1.80) and particulate matter less than 10 μm in diameter (OR 1.20, 95% CI 1.05–1.38). We observed a significant effect of the air pollutants in the summer months among men but not among women (e.g., OR for increase in the 5-day average of nitrogen dioxide 2.05, 95% CI 1.21–3.47, among men and 1.48, 95% CI 0.85–2.59, among women). The double-pollutant model of exposure to ozone and nitrogen dioxide in the summer months was associated with attenuation of the effects of ozone (OR 1.22, 95% CI 1.01–1.48) and nitrogen dioxide (OR 1.48, 95% CI 0.97–2.24).

Interpretation

Our findings suggest that some cases of appendicitis may be triggered by short-term exposure to air pollution. If these findings are confirmed, measures to improve air quality may help to decrease rates of appendicitis.

Appendicitis was introduced into the medical vernacular in 1886.1 Since then, the prevailing theory of its pathogenesis has implicated obstruction of the appendiceal orifice by a fecalith or lymphoid hyperplasia.2 However, this notion does not completely account for variations in incidence observed by age,3,4 sex,3,4 ethnic background,3,4 family history,5 temporal–spatial clustering6 and seasonality,3,4 nor does it completely explain the trends in incidence of appendicitis in developed and developing nations.3,7,8

The incidence of appendicitis increased dramatically in industrialized nations in the 19th century and the early part of the 20th century.1 Without explanation, it decreased in the middle and latter parts of the 20th century.3 The decrease coincided with legislation to improve air quality. For example, after the United States Clean Air Act was passed in 1970,9 the incidence of appendicitis decreased by 14.6% from 1970 to 1984.3 Likewise, a 36% drop in incidence was reported in the United Kingdom between 1975 and 1994,10 after legislation was passed in 1956 and 1968 to improve air quality and in the 1970s to control industrial sources of air pollution. Furthermore, appendicitis is less common in developing nations; however, as these countries become more industrialized, the incidence of appendicitis has been increasing.7

Air pollution is known to be a risk factor for multiple conditions, to exacerbate disease states and to increase all-cause mortality.11 It has a direct effect on pulmonary diseases such as asthma11 and on nonpulmonary diseases including myocardial infarction, stroke and cancer.11–13 Inflammation induced by exposure to air pollution contributes to some adverse health effects.14–17 Similar to the effects of air pollution, a proinflammatory response has been associated with appendicitis.18–20

We conducted a case–crossover study involving a population-based cohort of patients admitted to hospital with appendicitis to determine whether short-term increases in concentrations of selected air pollutants were associated with hospital admission because of appendicitis.
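Effect sizes in studies like this are typically expressed per interquartile-range (IQR) increase in pollutant concentration: a fitted log-odds slope per unit of exposure is rescaled by the IQR and exponentiated. A minimal sketch; the coefficient, standard error and IQR below are invented for illustration, not the study's estimates:

```python
import math

def or_per_iqr(beta_per_unit, iqr, se_per_unit=None, z=1.96):
    """Convert a logistic-regression slope per unit of exposure
    into an odds ratio per IQR increase, with an optional Wald CI."""
    or_iqr = math.exp(beta_per_unit * iqr)
    if se_per_unit is None:
        return or_iqr, None
    half = z * se_per_unit * iqr
    ci = (math.exp(beta_per_unit * iqr - half),
          math.exp(beta_per_unit * iqr + half))
    return or_iqr, ci

# Hypothetical: beta = 0.010 per ppb of ozone, IQR = 13 ppb
or_iqr, ci = or_per_iqr(0.010, 13, se_per_unit=0.004)
print(round(or_iqr, 2))  # 1.14
```

Rescaling to the IQR makes ORs comparable across pollutants measured in different units.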

3.

Background:

Little evidence exists on the effect of an energy-unrestricted healthy diet on metabolic syndrome. We evaluated the long-term effect of Mediterranean diets ad libitum on the incidence or reversion of metabolic syndrome.

Methods:

We performed a secondary analysis of the PREDIMED trial — a multicentre, randomized trial done between October 2003 and December 2010 that involved men and women (age 55–80 yr) at high risk for cardiovascular disease. Participants were randomly assigned to 1 of 3 dietary interventions: a Mediterranean diet supplemented with extra-virgin olive oil, a Mediterranean diet supplemented with nuts or advice on following a low-fat diet (the control group). The interventions did not include increased physical activity or weight loss as a goal. We analyzed available data from 5801 participants. We determined the effect of diet on incidence and reversion of metabolic syndrome using Cox regression analysis to calculate hazard ratios (HRs) and 95% confidence intervals (CIs).

Results:

Over 4.8 years of follow-up, metabolic syndrome developed in 960 (50.0%) of the 1919 participants who did not have the condition at baseline. The risk of developing metabolic syndrome did not differ between participants assigned to the control diet and those assigned to either of the Mediterranean diets (control v. olive oil HR 1.10, 95% CI 0.94–1.30, p = 0.231; control v. nuts HR 1.08, 95% CI 0.92–1.27, p = 0.3). Reversion occurred in 958 (28.2%) of the 3392 participants who had metabolic syndrome at baseline. Compared with the control group, participants on either Mediterranean diet were more likely to undergo reversion (control v. olive oil HR 1.35, 95% CI 1.15–1.58, p < 0.001; control v. nuts HR 1.28, 95% CI 1.08–1.51, p < 0.001). Participants in the group receiving olive oil supplementation showed significant decreases in both central obesity and high fasting glucose (p = 0.02); participants in the group supplemented with nuts showed a significant decrease in central obesity.

Interpretation:

A Mediterranean diet supplemented with either extra-virgin olive oil or nuts is not associated with the onset of metabolic syndrome, but such diets are more likely to cause reversion of the condition. An energy-unrestricted Mediterranean diet may be useful in reducing the risks of central obesity and hyperglycemia in people at high risk of cardiovascular disease. Trial registration: Current Controlled Trials, no. ISRCTN35739639.

Metabolic syndrome is a cluster of 3 or more related cardiometabolic risk factors: central obesity (determined by waist circumference), hypertension, hypertriglyceridemia, low plasma high-density lipoprotein (HDL) cholesterol levels and hyperglycemia. Having the syndrome increases a person’s risk for type 2 diabetes and cardiovascular disease.1,2 In addition, the condition is associated with increased morbidity and all-cause mortality.1,3–5 The worldwide prevalence of metabolic syndrome in adults approaches 25%6–8 and increases with age,7 especially among women,8,9 making it an important public health issue.

Several studies have shown that lifestyle modifications,10 such as increased physical activity,11 adherence to a healthy diet12,13 or weight loss,14–16 are associated with reversion of the metabolic syndrome and its components. However, little information exists as to whether changes in the overall dietary pattern without weight loss might also be effective in preventing and managing the condition.

The Mediterranean diet is recognized as one of the healthiest dietary patterns. It has shown benefits in patients with cardiovascular disease17,18 and in the prevention and treatment of related conditions, such as diabetes,19–21 hypertension22,23 and metabolic syndrome.24 Several cross-sectional25–29 and prospective30–32 epidemiologic studies have suggested an inverse association between adherence to the Mediterranean diet and the prevalence or incidence of metabolic syndrome. Evidence from clinical trials has shown that an energy-restricted Mediterranean diet33 or adopting a Mediterranean diet after weight loss34 has a beneficial effect on metabolic syndrome. However, these studies did not determine whether the effect could be attributed to the weight loss or to the diets themselves.

Seminal data from the PREDIMED (PREvención con DIeta MEDiterránea) study suggested that adherence to a Mediterranean diet supplemented with nuts reversed metabolic syndrome more so than advice to follow a low-fat diet.35 However, that report was based on data from only 1224 participants followed for 1 year. We have analyzed data from the final PREDIMED cohort after a median follow-up of 4.8 years to determine the long-term effects of a Mediterranean diet on metabolic syndrome.
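The hazard ratios above come from Cox proportional-hazards regression on individual follow-up times. As a rough pure-Python illustration of the quantity a hazard ratio approximates, a crude incidence-rate ratio can be computed from event counts and person-years; the numbers below are invented, not PREDIMED data:

```python
def incidence_rate_ratio(events_a, py_a, events_b, py_b):
    """Crude incidence-rate ratio between two groups, from
    event counts and person-years of follow-up."""
    rate_a = events_a / py_a
    rate_b = events_b / py_b
    return rate_a / rate_b, rate_a, rate_b

# Hypothetical: reversion events per person-year, diet arm v. control arm
irr, rate_diet, rate_ctrl = incidence_rate_ratio(350, 5000, 260, 5000)
print(round(irr, 2))  # 1.35
```

Unlike this crude ratio, the Cox model handles censoring and adjusts for covariates, which is why the published HRs differ from what raw proportions would give.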

4.
Background:

Rates of imaging for low-back pain are high and are associated with increased health care costs and radiation exposure, as well as potentially poorer patient outcomes. We conducted a systematic review to investigate the effectiveness of interventions aimed at reducing the use of imaging for low-back pain.

Methods:

We searched MEDLINE, Embase, CINAHL and the Cochrane Central Register of Controlled Trials from the earliest records to June 23, 2014. We included randomized controlled trials, controlled clinical trials and interrupted time series studies that assessed interventions designed to reduce the use of imaging in any clinical setting, including primary, emergency and specialist care. Two independent reviewers extracted data and assessed risk of bias. We used raw data on imaging rates to calculate summary statistics. Study heterogeneity prevented meta-analysis.

Results:

A total of 8500 records were identified through the literature search. Of the 54 potentially eligible studies reviewed in full, 7 were included in our review. Clinical decision support involving a modified referral form in a hospital setting reduced imaging by 36.8% (95% confidence interval [CI] 33.2% to 40.5%). Targeted reminders to primary care physicians of appropriate indications for imaging reduced referrals for imaging by 22.5% (95% CI 8.4% to 36.8%). Interventions that used practitioner audits and feedback, practitioner education or guideline dissemination did not significantly reduce imaging rates. Lack of power within some of the included studies meant that potentially clinically important effects did not reach statistical significance.

Interpretation:

Clinical decision support in a hospital setting and targeted reminders to primary care doctors were effective in reducing the use of imaging for low-back pain. These are potentially low-cost interventions that would substantially decrease medical expenditures associated with the management of low-back pain.

Current evidence-based clinical practice guidelines recommend against the routine use of imaging in patients presenting with low-back pain.1–3 Despite this, imaging rates remain high,4,5 which indicates poor concordance with these guidelines.6,7 Unnecessary imaging for low-back pain has been associated with poorer patient outcomes, increased radiation exposure and higher health care costs.8 No short- or long-term clinical benefits have been shown with routine imaging of the low back, and the diagnostic value of incidental imaging findings remains uncertain.9–12 A 2008 systematic review found that imaging accounted for 7% of direct costs associated with low-back pain, which in 1998 translated to more than US$6 billion in the United States and £114 million in the United Kingdom.13 Current costs are likely to be substantially higher, with an estimated 65% increase in spine-related expenditures between 1997 and 2005.14

Various interventions have been tried to reduce imaging rates among people with low-back pain. These include strategies targeted at the practitioner, such as guideline dissemination,15–17 education workshops,18,19 audit and feedback of imaging use,7,20,21 ongoing reminders7 and clinical decision support.22–24 It is unclear which, if any, of these strategies are effective.25 We conducted a systematic review to investigate the effectiveness of interventions designed to reduce imaging rates for the management of low-back pain.

5.

Background

Abdominal visceral adiposity in early pregnancy has been associated with impaired glucose tolerance in later pregnancy. The “hypertriglyceridemic waist” phenotype (i.e., abdominal obesity in combination with hypertriglyceridemia) is a clinical marker of visceral obesity. Our study aimed to assess the association between the hypertriglyceridemic-waist phenotype in early pregnancy and glucose intolerance in later pregnancy.

Methods

Plasma triglycerides and waist girth were measured at 11–14 weeks of gestation among 144 white pregnant women. Glycemia was measured following a 75-g oral glucose tolerance test performed at 24–28 weeks of gestation.

Results

A waist girth greater than 85 cm in combination with a triglyceride level ≥ 1.7 mmol/L in the first trimester was associated with an increased risk of two-hour glucose ≥ 7.8 mmol/L following the 75-g oral glucose tolerance test (odds ratio [OR] 6.1, p = 0.002). This risk remained significant even after we controlled for maternal age, fasting glucose at first trimester and previous history of gestational diabetes (OR 4.7, p = 0.02).

Interpretation

Measurement of waist girth in combination with measurement of triglyceride concentrations in the first trimester of pregnancy could improve early screening for gestational glucose intolerance.

Early and accessible screening tools for gestational diabetes mellitus are needed to improve pregnancy-related outcomes for women and children.1 Diagnostic tools currently used for gestational diabetes (most commonly the fasting oral glucose tolerance test) are expensive, time-consuming and uncomfortable for pregnant women, and they do not allow diagnosis before the end of the second trimester of pregnancy.2 Some earlier screening tools have been suggested, such as a 50-g glucose load followed by a one-hour plasma glucose analysis, which can be performed at any time of day and early in pregnancy. Although this test is less uncomfortable than the fasting oral glucose tolerance test, it remains time-consuming and unpleasant for women. Its use is therefore restricted mainly to women at risk for gestational diabetes.

Several attempts have been made to simplify these tests to promote wider use. However, no results have yet identified the means to carry out early and widely accessible screening. Interesting positive and negative predictive values have been obtained for first-trimester fasting glucose and insulin for subsequent gestational diabetes expression.3–5 However, these studies were performed among women at risk of gestational diabetes and should be replicated in samples of women with various risk levels. Regardless of pregnancy, a person with normal fasting glucose can still meet the criteria for glucose intolerance or diabetes during an oral glucose tolerance test. Several studies have also shown that maternal pre-pregnancy obesity is directly associated with an increased risk of gestational diabetes. However, results vary widely across studies, possibly because of a lack of specificity in the tools used to measure adiposity.6 A recent study has suggested that abdominal visceral adiposity, specifically, is associated with risk of gestational diabetes expression.7

The hypertriglyceridemic-waist phenotype has been identified as a simple, easily available clinical marker of visceral obesity and related metabolic abnormalities.8,9 It is defined as the simultaneous presence of abdominal obesity (i.e., a waist girth greater than 85 cm in women or greater than 90 cm in men) and hypertriglyceridemia (i.e., a triglyceride concentration ≥ 2 mmol/L). The aim of our study was to document the association between the presence of the hypertriglyceridemic-waist phenotype in early pregnancy and impaired glucose tolerance in later pregnancy.
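The phenotype definition quoted above maps directly onto a simple screening flag. A minimal sketch using the cut-offs stated in this abstract (waist girth > 85 cm in women or > 90 cm in men, triglycerides ≥ 2 mmol/L); note that the study's two-hour glucose analysis used a slightly lower triglyceride threshold of 1.7 mmol/L:

```python
def hypertriglyceridemic_waist(waist_cm, tg_mmol_l, sex="F"):
    """Flag the hypertriglyceridemic-waist phenotype: abdominal
    obesity (waist > 85 cm in women, > 90 cm in men) together
    with hypertriglyceridemia (triglycerides >= 2 mmol/L)."""
    waist_cutoff = 85 if sex == "F" else 90
    return waist_cm > waist_cutoff and tg_mmol_l >= 2.0

print(hypertriglyceridemic_waist(92, 2.3))  # True
print(hypertriglyceridemic_waist(80, 2.5))  # False: waist below cutoff
```

Both components must be present; either one alone does not constitute the phenotype.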

6.

Background

Children with developmental coordination disorder have been found to be less likely to participate in physical activities and therefore may be at increased risk of overweight and obesity. We examined the longitudinal course of relative weight and waist circumference among school-aged children with and without possible developmental coordination disorder.

Methods

We received permission from 75 (83%) of 92 schools in southwestern Ontario, Canada, to enrol children in the fourth grade (ages 9 and 10 at baseline). Informed consent from the parents of 2278 (95.8%) of 2378 children in these schools was obtained at baseline. The main outcome measures were body mass index (BMI) and waist circumference. Children were followed up over two years, from the spring of 2005 to the spring of 2007.

Results

Over the course of the study, we identified 111 children (46 boys and 65 girls) who had possible developmental coordination disorder. These children had a higher mean BMI and waist circumference at baseline than did those without the disorder; these differences persisted or increased slightly over time. Children with possible developmental coordination disorder were also at persistently greater risk of overweight (odds ratio [OR] 3.44, 95% confidence interval [CI] 2.34–5.07) and obesity (OR 4.00, 95% CI 2.57–6.21) over the course of the study.

Interpretation

Our findings showed that children with possible developmental coordination disorder were at greater risk of overweight and obesity than children without the disorder. This risk did not diminish over the study period.

Developmental coordination disorder is a neurodevelopmental condition that affects 5%–6% of school-aged children.1 Children with the disorder present with a range of coordination difficulties, including fine and gross motor problems,2 all of which interfere with normal daily activities, recreational activities and academic performance skills such as handwriting.3 Developmental coordination disorder is diagnosed when existing neurologic and physical problems are ruled out as the cause of motor coordination difficulties and intellectual development has been taken into consideration (Box 1).1,4 The clinical implications of a diagnosis have been described previously.5

Box 1. Diagnostic criteria for developmental coordination disorder

  A. Performance in daily activities that require motor coordination is substantially below that expected given the person’s chronological age and measured intelligence. This may be manifested by marked delays in achieving motor milestones (e.g., walking, crawling, sitting), dropping things, “clumsiness,” poor performance in sports or poor handwriting.
  B. The disturbance in criterion A significantly interferes with academic achievement or activities of daily living.
  C. The disturbance is not due to a general medical condition (e.g., cerebral palsy, hemiplegia or muscular dystrophy) and does not meet criteria for a pervasive developmental disorder.
  D. If mental retardation is present, the motor difficulties are in excess of those usually associated with it.
Reproduced with permission from the Diagnostic and Statistical Manual of Mental Disorders, Text Revision, Fourth Edition.4 Copyright © 2000 American Psychiatric Association.

Because children with developmental coordination disorder have been found to be less likely to participate in physical activities,6 it has been hypothesized that this condition may be a risk factor for obesity.7 Only a few studies have examined the association between motor coordination problems and overweight or obesity in children.7–10 Moreover, the literature in this area is limited in two key respects. First, previous research has relied almost exclusively on body mass index (BMI) as the outcome measure.8–10 Although important, BMI is not the only indicator of relative weight, and it has been shown to be weakly correlated with fat mass in young children.11,12 Waist circumference provides valid estimates of abdominal fat in pediatric populations13 and appears to be a stronger predictor of cardiovascular risk among children.14,15 Second, previous research in this area has been limited to cross-sectional data, with two notable exceptions.8,9 However, results from these two prospective studies were mixed: one study showed a significant effect of motor coordination on weight,8 while the other did not.9

Our objective was to document several measures of adiposity over time in children with and without developmental coordination disorder.

7.

Background:

The gut microbiota is essential to human health throughout life, yet the acquisition and development of this microbial community during infancy remains poorly understood. Meanwhile, there is increasing concern over rising rates of cesarean delivery and insufficient exclusive breastfeeding of infants in developed countries. In this article, we characterize the gut microbiota of healthy Canadian infants and describe the influence of cesarean delivery and formula feeding.

Methods:

We included a subset of 24 term infants from the Canadian Healthy Infant Longitudinal Development (CHILD) birth cohort. Mode of delivery was obtained from medical records, and mothers were asked to report on infant diet and medication use. Fecal samples were collected at 4 months of age, and we characterized the microbiota composition using high-throughput DNA sequencing.

Results:

We observed high variability in the profiles of fecal microbiota among the infants. The profiles were generally dominated by Actinobacteria (mainly the genus Bifidobacterium) and Firmicutes (with diverse representation from numerous genera). Compared with breastfed infants, formula-fed infants had increased richness of species, with overrepresentation of Clostridium difficile. Escherichia–Shigella and Bacteroides species were underrepresented in infants born by cesarean delivery. Infants born by elective cesarean delivery had particularly low bacterial richness and diversity.

Interpretation:

These findings advance our understanding of the gut microbiota in healthy infants. They also provide new evidence for the effects of delivery mode and infant diet as determinants of this essential microbial community in early life.

The human body harbours trillions of microbes, known collectively as the “human microbiome.” By far the highest density of commensal bacteria is found in the digestive tract, where resident microbes outnumber host cells by at least 10 to 1. Gut bacteria play a fundamental role in human health by promoting intestinal homeostasis, stimulating development of the immune system, providing protection against pathogens, and contributing to the processing of nutrients and harvesting of energy.1,2 The disruption of the gut microbiota has been linked to an increasing number of diseases, including inflammatory bowel disease, necrotizing enterocolitis, diabetes, obesity, cancer, allergies and asthma.1 Despite this evidence and a growing appreciation for the integral role of the gut microbiota in lifelong health, relatively little is known about the acquisition and development of this complex microbial community during infancy.3

Two of the best-studied determinants of the gut microbiota during infancy are mode of delivery and exposure to breast milk.4,5 Cesarean delivery perturbs normal colonization of the infant gut by preventing exposure to maternal microbes, whereas breastfeeding promotes a “healthy” gut microbiota by providing selective metabolic substrates for beneficial bacteria.3,5 Despite recommendations from the World Health Organization,6 the rate of cesarean delivery has continued to rise in developed countries, and rates of breastfeeding decrease substantially within the first few months of life.7,8 In Canada, more than 1 in 4 newborns are born by cesarean delivery, and less than 15% of infants are exclusively breastfed for the recommended duration of 6 months.9,10 In some parts of the world, elective cesarean deliveries are performed at maternal request, often because of apprehension about pain during childbirth, and sometimes for patient–physician convenience.11

The potential long-term consequences of decisions regarding mode of delivery and infant diet should not be underestimated. Infants born by cesarean delivery are at increased risk of asthma, obesity and type 1 diabetes,12 whereas breastfeeding is variably protective against these and other disorders.13 These long-term health consequences may be partially attributable to disruption of the gut microbiota.12,14

Historically, the gut microbiota has been studied with the use of culture-based methods to examine individual organisms. However, up to 80% of intestinal microbes cannot be grown in culture.3,15 New technology using culture-independent DNA sequencing enables comprehensive detection of intestinal microbes and permits simultaneous characterization of entire microbial communities. Multinational consortia have been established to characterize the “normal” adult microbiome using these new methods;16 however, the methods have been underused in infant studies. Because early colonization may have long-lasting effects on health, infant studies are vital.3,4 Among the few studies of infant gut microbiota using DNA sequencing, most were conducted in restricted populations, such as infants delivered vaginally,17 infants born by cesarean delivery who were formula-fed18 or preterm infants with necrotizing enterocolitis.19

In the current study, we address this gap in knowledge using new sequencing technology and detailed exposure assessments20 of healthy Canadian infants selected from a national birth cohort to provide representative, comprehensive profiles of gut microbiota according to mode of delivery and infant diet.
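The bacterial "richness and diversity" contrasted between delivery modes above are conventionally summarized with indices such as observed species richness and Shannon diversity. A small pure-Python sketch over hypothetical genus-level read counts (not the study's data):

```python
import math

def richness_and_shannon(counts):
    """Observed richness (number of nonzero taxa) and Shannon
    diversity H = -sum(p * ln p), computed from raw read counts."""
    counts = [c for c in counts if c > 0]
    total = sum(counts)
    shannon = -sum((c / total) * math.log(c / total) for c in counts)
    return len(counts), shannon

# Hypothetical reads: e.g. Bifidobacterium, Clostridium, Bacteroides
rich, h = richness_and_shannon([50, 30, 20])
print(rich, round(h, 3))  # 3 1.03
```

Richness counts taxa regardless of abundance, while Shannon diversity also rewards evenness, so a community dominated by one taxon scores low on H even if many taxa are detectable.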

8.
Background:Otitis media with effusion is a common problem that lacks an evidence-based nonsurgical treatment option. We assessed the clinical effectiveness of treatment with a nasal balloon device in a primary care setting.Methods:We conducted an open, pragmatic randomized controlled trial set in 43 family practices in the United Kingdom. Children aged 4–11 years with a recent history of ear symptoms and otitis media with effusion in 1 or both ears, confirmed by tympanometry, were allocated to receive either autoinflation 3 times daily for 1–3 months plus usual care or usual care alone. Clearance of middle-ear fluid at 1 and 3 months was assessed by experts masked to allocation.Results:Of 320 children enrolled, those receiving autoinflation were more likely than controls to have normal tympanograms at 1 month (47.3% [62/131] v. 35.6% [47/132]; adjusted relative risk [RR] 1.36, 95% confidence interval [CI] 0.99 to 1.88) and at 3 months (49.6% [62/125] v. 38.3% [46/120]; adjusted RR 1.37, 95% CI 1.03 to 1.83; number needed to treat = 9). Autoinflation produced greater improvements in ear-related quality of life (adjusted between-group difference in change from baseline in OMQ-14 [an ear-related measure of quality of life] score −0.42, 95% CI −0.63 to −0.22). Compliance was 89% at 1 month and 80% at 3 months. Adverse events were mild, infrequent and comparable between groups.Interpretation:Autoinflation in children aged 4–11 years with otitis media with effusion is feasible in primary care and effective both in clearing effusions and improving symptoms and ear-related child and parent quality of life. Trial registration: ISRCTN, No. 55208702.Otitis media with effusion, also known as glue ear, is an accumulation of fluid in the middle ear, without symptoms or signs of an acute ear infection. 
It is often associated with viral infection.13 The prevalence rises to 46% in children aged 4–5 years,4 when hearing difficulty, other ear-related symptoms and broader developmental concerns often bring the condition to medical attention.3,5,6 Middle-ear fluid is associated with conductive hearing losses of about 15–45 dB HL.7 Resolution is clinically unpredictable,810 with about a third of cases showing recurrence.11 In the United Kingdom, about 200 000 children with the condition are seen annually in primary care.12,13 Research suggests some children seen in primary care are as badly affected as those seen in hospital.7,9,14,15 In the United States, there were 2.2 million diagnosed episodes in 2004, costing an estimated $4.0 billion.16 Rates of ventilation tube surgery show variability between countries,1719 with a declining trend in the UK.20Initial clinical management consists of reasonable temporizing or delay before considering surgery.13 Unfortunately, all available medical treatments for otitis media with effusion such as antibiotics, antihistamines, decongestants and intranasal steroids are ineffective and have unwanted effects, and therefore cannot be recommended.2123 Not only are antibiotics ineffective, but resistance to them poses a major threat to public health.24,25 Although surgery is effective for a carefully selected minority,13,26,27 a simple low-cost, nonsurgical treatment option could benefit a much larger group of symptomatic children, with the purpose of addressing legitimate clinical concerns without incurring excessive delays.Autoinflation using a nasal balloon device is a low-cost intervention with the potential to be used more widely in primary care, but current evidence of its effectiveness is limited to several small hospital-based trials28 that found a higher rate of tympanometric resolution of ear fluid at 1 month.2931 Evidence of feasibility and effectiveness of autoinflation to inform wider clinical use is lacking.13,28 Thus we 
report here the findings of a large pragmatic trial of the clinical effectiveness of nasal balloon autoinflation in a spectrum of children with clinically confirmed otitis media with effusion identified from primary care.
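The absolute effect size in the trial above can be recovered from the raw counts: the number needed to treat of 9 at 3 months follows from the difference in the proportions with normal tympanograms. A minimal sketch in Python (the crude risk ratio differs slightly from the adjusted RR of 1.37 reported by the trial, which accounts for covariates):

```python
import math

def crude_effects(events_tx, n_tx, events_ctl, n_ctl):
    """Crude risk ratio, risk difference and number needed to treat
    from two-arm trial counts (no covariate adjustment)."""
    p_tx = events_tx / n_tx     # proportion with normal tympanogram, treatment
    p_ctl = events_ctl / n_ctl  # same, control
    rr = p_tx / p_ctl
    rd = p_tx - p_ctl
    nnt = math.ceil(1 / rd)     # round up: whole children
    return rr, rd, nnt

# 3-month counts reported in the trial: 62/125 autoinflation v. 46/120 usual care
rr, rd, nnt = crude_effects(62, 125, 46, 120)
print(round(rr, 2), round(rd, 3), nnt)  # 1.29 0.113 9
```

The NNT of 9 matches the abstract; the crude RR of 1.29 is below the adjusted 1.37, as expected when adjustment shifts the estimate.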

9.

Background:

Limited evidence suggests that adiposity and lack of physical activity may increase the risk of chronic obstructive pulmonary disease (COPD). We investigated the relation of body size and physical activity with incidence of COPD.

Methods:

We obtained data on anthropometric measurements and physical activity from 113 279 participants in the National Institutes of Health–AARP Diet and Health Study who reported no diagnosis of COPD at baseline (1995–1996). We estimated associations between these measurements and subsequent diagnosis of COPD between 1996 and 2006, with extensive adjustment for smoking and other potentially confounding variables.

Results:

Participants reported 3648 new COPD diagnoses during follow-up. The incidence of COPD was higher in both severely obese (body mass index [BMI] ≥ 35) and underweight (BMI < 18.5) participants, but after adjustment for waist circumference, only underweight remained positively associated with COPD (relative risk [RR] 1.56, 95% confidence interval [CI] 1.15–2.11). Larger waist circumference (highest v. normal categories, adjusted RR 1.72, 95% CI 1.37–2.16) and higher waist–hip ratio (highest v. normal categories, adjusted RR 1.46, 95% CI 1.23–1.73) were also positively associated with COPD. In contrast, hip circumference (highest v. normal categories, adjusted RR 0.78, 95% CI 0.62–0.98) and physical activity (≥ 5 v. 0 times/wk, adjusted RR 0.71, 95% CI 0.63–0.79) were inversely associated with COPD.

Interpretation:

Obesity, in particular abdominal adiposity, was associated with an increased risk of COPD, and increased hip circumference and physical activity were associated with a decreased risk of COPD. These findings suggest that following guidelines for a healthy body weight, body shape and physical activity may decrease the risk of COPD.

Chronic obstructive pulmonary disease (COPD) is a progressive, irreversible condition that severely affects quality of life1 and ability to work.2 Direct and indirect annual costs of COPD, including inpatient and outpatient care, medication and loss of productivity, sum to $50 billion in the United States3 and €39 billion (about US$50 billion) in Europe.4

Chronic obstructive pulmonary disease may be prevented by avoidance of tobacco smoke, occupational dust and other environmental air pollution.5 Body mass index (BMI) and physical activity are established correlates of disease progression among patients with COPD,6,7 but data relating body size or physical activity to incident COPD are sparse. The few studies available are based on small samples and show inverse relations of both BMI8,9 and physical activity10,11 to incidence of COPD. Data are lacking regarding waist or hip circumference in relation to COPD incidence. We therefore examined BMI, waist circumference, hip circumference, waist–hip ratio and physical activity in relation to incidence of COPD in a large cohort of women and men in the US.
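The BMI thresholds cited in this study are simple to compute from weight and height. A minimal sketch; the intermediate category labels are the standard WHO bands, not taken from the study, which singles out only the underweight (< 18.5) and severely obese (≥ 35) ends:

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index in kg/m^2, as used throughout the study above."""
    return weight_kg / height_m ** 2

def bmi_category(value: float) -> str:
    """Standard WHO BMI bands (assumed labels, for illustration)."""
    if value < 18.5:
        return "underweight"
    if value < 25:
        return "normal"
    if value < 30:
        return "overweight"
    if value < 35:
        return "obese"
    return "severely obese"

print(round(bmi(80, 1.75), 1), bmi_category(bmi(80, 1.75)))  # 26.1 overweight
```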

10.
Schultz AS  Finegan B  Nykiforuk CI  Kvern MA 《CMAJ》2011,183(18):E1334-E1344

Background:

Many hospitals have adopted smoke-free policies on their property. We examined the consequences of such policies at two Canadian tertiary acute-care hospitals.

Methods:

We conducted a qualitative study using ethnographic techniques over a six-month period. Participants (n = 186) shared their perspectives on and experiences with tobacco dependence and managing the use of tobacco, as well as their impressions of the smoke-free policy. We interviewed inpatients individually from eight wards (n = 82), key policy-makers (n = 9) and support staff (n = 14) and held 16 focus groups with health care providers and ward staff (n = 81). We also reviewed ward documents relating to tobacco dependence and looked at smoking-related activities on hospital property.

Results:

Noncompliance with the policy and exposure to secondhand smoke were ongoing concerns. People’s impressions of the use of tobacco varied, including divergent opinions as to whether such use was a bad habit or an addiction. Treatment for tobacco dependence and the management of symptoms of withdrawal were offered inconsistently. Participants voiced concerns over patient safety and leaving the ward to smoke.

Interpretation:

Policies mandating smoke-free hospital property have important consequences beyond noncompliance, including concerns over patient safety and disruptions to care. Without adequately available and accessible support for withdrawal from tobacco, patients will continue to face personal risk when they leave hospital property to smoke.

Canadian cities and provinces have passed smoking bans with the goal of reducing people’s exposure to secondhand smoke in workplaces, public spaces and on the property adjacent to public buildings.1,2 In response, Canadian health authorities and hospitals began implementing policies mandating smoke-free hospital property, with the goals of reducing the exposure of workers, patients and visitors to tobacco smoke while delivering a public health message about the dangers of smoking.2–5 An additional anticipated outcome was the reduced use of tobacco among patients and staff. The impetuses for adopting smoke-free policies include public support for such legislation and the potential for litigation for exposure to second-hand smoke.2,4

Tobacco use is a modifiable risk factor associated with a variety of cancers, cardiovascular diseases and respiratory conditions.6–11 Patients in hospital who use tobacco tend to have more surgical complications and exacerbations of acute and chronic health conditions than patients who do not use tobacco.6–11 Any policy aimed at reducing exposure to tobacco in hospitals is well supported by evidence, as is the integration of interventions targeting tobacco dependence.12 Unfortunately, most of the nearly five million Canadians who smoke will receive suboptimal treatment,13 as the routine provision of interventions for tobacco dependence in hospital settings is not a practice norm.14–16 In smoke-free hospitals, two studies suggest minimal support is offered for withdrawal,17,18 and one reports an increased use of nicotine-replacement therapy after the implementation of the smoke-free policy.19

Assessments of the
effectiveness of smoke-free policies for hospital property tend to focus on noncompliance and related issues of enforcement.17,20,21 Although evidence of noncompliance and litter on hospital property2,17,20 implies ongoing exposure to tobacco smoke, half of the participating hospital sites in one study reported less exposure to tobacco smoke within hospital buildings and on the property.18 In addition, there is evidence to suggest some decline in smoking among staff.18,19,21,22

We sought to determine the consequences of policies mandating smoke-free hospital property in two Canadian acute-care hospitals by eliciting lived experiences of the people faced with enacting the policies: patients and health care providers. In addition, we elicited stories from hospital support staff and administrators regarding the policies.

11.

Background

Fractures have largely been assessed by their impact on quality of life or health care costs. We conducted this study to evaluate the relation between fractures and mortality.

Methods

A total of 7753 randomly selected people (2187 men and 5566 women) aged 50 years and older from across Canada participated in a 5-year observational cohort study. Incident fractures were identified on the basis of validated self-report and were classified by type (vertebral, pelvic, forearm or wrist, rib, hip and “other”). We subdivided fracture groups by the year in which the fracture occurred during follow-up; those occurring in the fourth and fifth years were grouped together. We examined the relation between the time of the incident fracture and death.

Results

Compared with participants who had no fracture during follow-up, those who had a vertebral fracture in the second year were at increased risk of death (adjusted hazard ratio [HR] 2.7, 95% confidence interval [CI] 1.1–6.6); also at risk were those who had a hip fracture during the first year (adjusted HR 3.2, 95% CI 1.4–7.4). Among women, the risk of death was increased for those with a vertebral fracture during the first year (adjusted HR 3.7, 95% CI 1.1–12.8) or the second year of follow-up (adjusted HR 3.2, 95% CI 1.2–8.1). The risk of death was also increased among women with hip fracture during the first year of follow-up (adjusted HR 3.0, 95% CI 1.0–8.7).

Interpretation

Vertebral and hip fractures are associated with an increased risk of death. Interventions that reduce the incidence of these fractures need to be implemented to improve survival.

Osteoporosis-related fractures are a major health concern, affecting a growing number of individuals worldwide. The burden of fracture has largely been assessed by the impact on health-related quality of life and health care costs.1,2 Fractures can also be associated with death. However, studies that have examined the relation between fractures and mortality have had limitations that may influence their results and the generalizability of the studies, including small samples,3,4 the examination of only 1 type of fracture,4–10 the inclusion of only women,8,11 the enrolment of participants from specific areas (i.e., hospitals or certain geographic regions),3,4,7,8,10,12 the nonrandom selection of participants3–11 and the lack of statistical adjustment for confounding factors that may influence mortality.3,5–7,12

We evaluated the relation between incident fractures and mortality over a 5-year period in a cohort of men and women 50 years of age and older. In addition, we examined whether other characteristics of participants were risk factors for death.

12.
13.

Background:

Brief interventions delivered by family physicians to address excessive alcohol use among adult patients are effective. We conducted a study to determine whether such an intervention would be similarly effective in reducing binge drinking and excessive cannabis use among young people.

Methods:

We conducted a cluster randomized controlled trial involving 33 family physicians in Switzerland. Physicians in the intervention group received training in delivering a brief intervention to young people during the consultation in addition to usual care. Physicians in the control group delivered usual care only. Consecutive patients aged 15–24 years were recruited from each practice and, before the consultation, completed a confidential questionnaire about their general health and substance use. Patients were followed up at 3, 6 and 12 months after the consultation. The primary outcome measure was self-reported excessive substance use (≥ 1 episode of binge drinking, or ≥ 1 joint of cannabis per week, or both) in the past 30 days.

Results:

Of the 33 participating physicians, 17 were randomly allocated to the intervention group and 16 to the control group. Of the 594 participating patients, 279 (47.0%) identified themselves as binge drinkers or excessive cannabis users, or both, at baseline. Excessive substance use did not differ significantly between patients whose physicians were in the intervention group and those whose physicians were in the control group at any of the follow-up points (odds ratio [OR] and 95% confidence interval [CI] at 3 months: 0.9 [0.6–1.4]; at 6 mo: 1.0 [0.6–1.6]; and at 12 mo: 1.1 [0.7–1.8]). The differences between groups were also nonsignificant after we restricted the analysis to patients who reported excessive substance use at baseline (OR 1.6, 95% CI 0.9–2.8, at 3 mo; OR 1.7, 95% CI 0.9–3.2, at 6 mo; and OR 1.9, 95% CI 0.9–4.0, at 12 mo).

Interpretation:

Training family physicians to use a brief intervention to address excessive substance use among young people was not effective in reducing binge drinking and excessive cannabis use in this patient population. Trial registration: Australian New Zealand Clinical Trials Registry, no. ACTRN12608000432314.

Most health-compromising behaviours begin in adolescence.1 Interventions to address these behaviours early are likely to bring long-lasting benefits.2 Harmful use of alcohol is a leading factor associated with premature death and disability worldwide, with a disproportionately high impact on young people (aged 10–24 yr).3,4 Similarly, early cannabis use can have adverse consequences that extend into adulthood.5–8

In adolescence and early adulthood, binge drinking on at least a monthly basis is associated with an increased risk of adverse outcomes later in life.9–12 Although any cannabis use is potentially harmful, weekly use represents a threshold in adolescence related to an increased risk of cannabis (and tobacco) dependence in adulthood.13 Binge drinking affects 30%–50% and excessive cannabis use about 10% of the adolescent and young adult population in Europe and the United States.10,14,15

Reducing substance-related harm involves multisectoral approaches, including promotion of healthy child and adolescent development, regulatory policies and early treatment interventions.16 Family physicians can add to the public health messages by personalizing their content within brief interventions.17,18 There is evidence that brief interventions can encourage young people to reduce substance use, yet most studies have been conducted in community settings (mainly educational), emergency services or specialized addiction clinics.1,16 Studies aimed at adult populations have shown favourable effects of brief alcohol interventions, and to some extent brief cannabis interventions, in primary care.19–22 These interventions have been recommended for adolescent populations.4,5,16 Yet young
people have different modes of substance use and communication styles that may limit the extent to which evidence from adult studies can apply to them.

Recently, a systematic review of brief interventions to reduce alcohol use in adolescents identified only 1 randomized controlled trial in primary care.23 The tested intervention, not provided by family physicians but involving audio self-assessment, was ineffective in reducing alcohol use in exposed adolescents.24 Sanci and colleagues showed that training family physicians to address health-risk behaviours among adolescents was effective in improving provider performance, but the extent to which this translates into improved outcomes remains unknown.25,26 Two nonrandomized studies suggested screening for substance use and brief advice by family physicians could favour reduced alcohol and cannabis use among adolescents,27,28 but evidence from randomized trials is lacking.29

We conducted the PRISM-Ado (Primary care Intervention Addressing Substance Misuse in Adolescents) trial, a cluster randomized controlled trial of the effectiveness of training family physicians to deliver a brief intervention to address binge drinking and excessive cannabis use among young people.

14.

Background:

Recent warnings from Health Canada regarding codeine for children have led to increased use of nonsteroidal anti-inflammatory drugs and morphine for common injuries such as fractures. Our objective was to determine whether morphine administered orally has superior efficacy to ibuprofen in fracture-related pain.

Methods:

We used a parallel group, randomized, blinded superiority design. Children who presented to the emergency department with an uncomplicated extremity fracture were randomly assigned to receive either morphine (0.5 mg/kg orally) or ibuprofen (10 mg/kg) for 24 hours after discharge. Our primary outcome was the change in pain score using the Faces Pain Scale — Revised (FPS-R). Participants were asked to record pain scores immediately before and 30 minutes after receiving each dose.

Results:

We analyzed data from 66 participants in the morphine group and 68 participants in the ibuprofen group. For both morphine and ibuprofen, we found a reduction in pain scores (mean pre–post difference ± standard deviation for dose 1: morphine 1.5 ± 1.2, ibuprofen 1.3 ± 1.0, between-group difference [δ] 0.2 [95% confidence interval (CI) −0.2 to 0.6]; dose 2: morphine 1.3 ± 1.3, ibuprofen 1.3 ± 0.9, δ 0 [95% CI −0.4 to 0.4]; dose 3: morphine 1.3 ± 1.4, ibuprofen 1.4 ± 1.1, δ −0.1 [95% CI −0.7 to 0.4]; and dose 4: morphine 1.5 ± 1.4, ibuprofen 1.1 ± 1.2, δ 0.4 [95% CI −0.2 to 1.1]). We found no significant difference in the change in pain scores between the morphine and ibuprofen groups at any of the 4 time points (p = 0.6). Participants in the morphine group had significantly more adverse effects than those in the ibuprofen group (56.1% v. 30.9%, p < 0.01).

Interpretation:

We found no significant difference in analgesic efficacy between orally administered morphine and ibuprofen. However, morphine was associated with a significantly greater number of adverse effects. Our results suggest that ibuprofen remains safe and effective for outpatient pain management in children with uncomplicated fractures. Trial registration: ClinicalTrials.gov, no. NCT01690780.

There is ample evidence that analgesia is underused,1 underprescribed,2 delayed in its administration2 and suboptimally dosed3 in clinical settings. Children are particularly susceptible to suboptimal pain management4 and are less likely to receive opioid analgesia.5 Untreated pain in childhood has been reported to lead to short-term problems such as slower healing6 and to long-term issues such as anxiety, needle phobia,7 hyperesthesia8 and fear of medical care.9 The American Academy of Pediatrics has reaffirmed its advocacy for the appropriate use of analgesia for children with acute pain.10

Fractures constitute between 10% and 25% of all injuries.11 The most severe pain after an injury occurs within the first 48 hours, with more than 80% of children showing compromise in at least 1 functional area.12 Low rates of analgesia have been reported after discharge from hospital.13 A recently improved understanding of the pharmacogenomics of codeine has raised significant concerns about its safety,14,15 and has led to a Food and Drug Administration boxed warning16 and a Health Canada advisory17 against its use.
Although ibuprofen has been cited as the most common agent used by caregivers to treat musculoskeletal pain,12,13 there are concerns that its use as monotherapy may lead to inadequate pain management.6,18 Evidence suggests that orally administered morphine13 and other opioids are increasingly being prescribed.19 However, evidence for the oral administration of morphine in acute pain management is limited.20,21 Thus, additional studies are needed to address this gap in knowledge and provide a scientific basis for outpatient analgesic choices in children. Our objective was to assess whether orally administered morphine is superior to ibuprofen in relieving pain in children with nonoperative fractures.
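The between-group differences and confidence intervals in the results above can be approximated from the reported summary statistics alone. A rough sketch assuming a normal approximation to the Welch interval (the trial's own analysis may have differed):

```python
import math

def diff_ci(mean_a, sd_a, n_a, mean_b, sd_b, n_b, z=1.96):
    """Approximate 95% CI for a difference in means from summary
    statistics, using a normal approximation to the Welch interval."""
    delta = mean_a - mean_b
    se = math.sqrt(sd_a**2 / n_a + sd_b**2 / n_b)
    return delta, delta - z * se, delta + z * se

# dose 1: morphine 1.5 +/- 1.2 (n = 66) v. ibuprofen 1.3 +/- 1.0 (n = 68)
delta, lo, hi = diff_ci(1.5, 1.2, 66, 1.3, 1.0, 68)
print(round(delta, 1), round(lo, 1), round(hi, 1))  # 0.2 -0.2 0.6
```

The rounded interval matches the dose-1 result reported in the abstract (δ 0.2, 95% CI −0.2 to 0.6).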

15.
Celia Rodd  Atul K. Sharma 《CMAJ》2016,188(13):E313-E320
Background:

Previous studies have shown an increase in the prevalence of overweight and obesity among Canadian children from 23.3% to 34.7% during 1978–2004. We examined the most recent trends by applying current definitions of overweight and obesity based on World Health Organization (WHO) body mass index (BMI) thresholds and recently validated norms for waist circumference and waist:height ratio.

Methods:

We examined directly measured height and weight data from the Canadian Community Health Survey (2004–2005) and the Canadian Health Measures Survey (2009–2013). We calculated z scores for BMI, height and weight based on the 2014 WHO growth charts for Canada, including the new extension of weight-for-age beyond 10 years. To calculate z scores for waist circumference and waist:height ratios, we used new charts from the reference population in the US NHANES III (National Health and Nutrition Examination Survey, 1988–1994).

Results:

Data were available for 14 014 children aged 3–19 years for the period 2004–2013. We observed a decline in the prevalence of overweight or obesity, from 30.7% (95% confidence interval [CI] 29.7% to 31.6%) to 27.0% (95% CI 25.3% to 28.7%) (p < 0.001) and stabilization in the prevalence of obesity at about 13%. These trends persisted after we adjusted for age, sex and race/ethnicity. Although they declined, the median z scores for BMI, weight and height were positive and higher than those in the WHO reference population. The z scores for waist circumference and waist:height ratio were negative, which indicated that the Canadian children had less central adiposity than American children in historic or contemporary NHANES cohorts.

Interpretation:

After a period of dramatic growth, BMI z scores and the prevalence of overweight or obesity among Canadian children decreased from 2004 to 2013, which attests to progress against this important public health challenge.

Ongoing pan-Canadian surveys such as the Canadian Community Health Survey (CCHS) and Canadian Health Measures Survey (CHMS) are important to evaluate the health of our population using representative national samples.1,2 Self-reported heights and weights replaced direct measurement during 1978–2004, which underestimated true rates of overweight and obesity.3 A subsequent comparison of directly measured heights and weights during the same period showed an alarming increase in the prevalence of overweight or obesity among Canadian children aged 2–17 years, from 23.3% (95% confidence interval [CI] 20.5% to 26.0%) to 34.7% (95% CI 33.0% to 36.4%) based on the new World Health Organization (WHO) definitions.1

In Canada, the definitions of overweight and obesity changed with the introduction of the 2010 WHO growth charts for Canada.4,5 Previous definitions were based on body mass index (BMI) percentiles from the 2000 US Centers for Disease Control and Prevention (CDC) growth charts.6 In addition to revising these percentile thresholds, the WHO charts were based on a different reference population; as a result, the proportion of Canadian children classified as overweight or obese increased with the introduction of the new WHO charts.1,7,8 Moreover, the absolute percentile thresholds now vary by age, with toddlers (2 to ≤ 5 yr) having higher thresholds to define overweight and obesity than older children (age > 5 to 19 yr).4

Results from the United States have shown a decline in obesity rates among toddlers and a plateau in rates among
older children;9,10 stabilization has also been noted in other jurisdictions (e.g., Germany and Australia).11–16 We undertook this study to determine the most recent trends in the prevalence of overweight and obesity among Canadian children using the current WHO weight charts for Canada applied to a representative sample of children.
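The z scores reported above come from growth-chart reference data. Under the LMS method used by both the WHO and CDC charts, a measurement X is converted with z = ((X/M)^L − 1)/(L·S), where L, M and S are the age- and sex-specific skewness, median and coefficient of variation. A minimal sketch with made-up illustrative LMS values, not taken from the WHO or NHANES tables:

```python
import math

def lms_z(x: float, L: float, M: float, S: float) -> float:
    """LMS z score: z = ((x/M)**L - 1) / (L*S); as L -> 0 this tends to
    ln(x/M)/S, the log-normal special case."""
    if abs(L) < 1e-9:
        return math.log(x / M) / S
    return ((x / M) ** L - 1) / (L * S)

# Hypothetical LMS parameters for illustration only (L=-1.6, M=16.0, S=0.11):
# a BMI equal to the median M gives z = 0 by construction.
print(round(lms_z(16.0, -1.6, 16.0, 0.11), 4))  # 0.0
print(round(lms_z(19.0, -1.6, 16.0, 0.11), 2))
```

A positive median z score, as the abstract reports for BMI, simply means the surveyed children sit above the reference median M at their age and sex.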

16.

Background:

Previous studies have suggested that the immunochemical fecal occult blood test has superior specificity for detecting bleeding in the lower gastrointestinal tract even if bleeding occurs in the upper tract. We conducted a large population-based study involving asymptomatic adults in Taiwan, a population with prevalent upper gastrointestinal lesions, to confirm this claim.

Methods:

We conducted a prospective cohort study involving asymptomatic people aged 18 years or more in Taiwan recruited to undergo an immunochemical fecal occult blood test, colonoscopy and esophagogastroduodenoscopy between August 2007 and July 2009. We compared the prevalence of lesions in the lower and upper gastrointestinal tracts between patients with positive and negative fecal test results. We also identified risk factors associated with a false-positive fecal test result.

Results:

Of the 2796 participants, 397 (14.2%) had a positive fecal test result. The sensitivity of the test for predicting lesions in the lower gastrointestinal tract was 24.3%, the specificity 89.0%, the positive predictive value 41.3%, the negative predictive value 78.7%, the positive likelihood ratio 2.22, the negative likelihood ratio 0.85 and the accuracy 73.4%. The prevalence of lesions in the lower gastrointestinal tract was higher among those with a positive fecal test result than among those with a negative result (41.3% v. 21.3%, p < 0.001). The prevalence of lesions in the upper gastrointestinal tract did not differ significantly between the two groups (20.7% v. 17.5%, p = 0.12). Almost all of the participants found to have colon cancer (27/28, 96.4%) had a positive fecal test result; in contrast, none of the three found to have esophageal or gastric cancer had a positive fecal test result (p < 0.001). Among those with a negative finding on colonoscopy, the risk factors associated with a false-positive fecal test result were use of antiplatelet drugs (adjusted odds ratio [OR] 2.46, 95% confidence interval [CI] 1.21–4.98) and a low hemoglobin concentration (adjusted OR 2.65, 95% CI 1.62–4.33).

Interpretation:

The immunochemical fecal occult blood test was specific for predicting lesions in the lower gastrointestinal tract. However, the test did not adequately predict lesions in the upper gastrointestinal tract.

The fecal occult blood test is a convenient tool to screen for asymptomatic gastrointestinal bleeding.1 When the test result is positive, colonoscopy is the strategy of choice to investigate the source of bleeding.2,3 However, 13%–42% of patients can have a positive test result but a negative colonoscopy,4 and it has not yet been determined whether asymptomatic patients should then undergo evaluation of the upper gastrointestinal tract.

Previous studies showed that the frequency of lesions in the upper gastrointestinal tract was comparable to or even higher than that of colonic lesions5–9 and that the use of esophagogastroduodenoscopy may change clinical management.10,11 Some studies showed that evaluation of the upper gastrointestinal tract helped to identify important lesions in symptomatic patients and those with iron deficiency anemia;12,13 however, others concluded that esophagogastroduodenoscopy was unjustified because important findings in the upper gastrointestinal tract were rare14–17 and sometimes irrelevant to the results of fecal occult blood testing.18–21 This controversy is related to the heterogeneity of study populations and to the limitations of the formerly used guaiac-based fecal occult blood test,5–20 which was not able to distinguish bleeding in the lower gastrointestinal tract from that originating in the upper tract.

The guaiac-based fecal occult blood test is increasingly being replaced by the immunochemical-based test.
The latter is recommended for detecting bleeding in the lower gastrointestinal tract because it reacts with human globin, a protein that is digested by enzymes in the upper gastrointestinal tract.22 With this advantage, the occurrence of a positive fecal test result and a negative finding on colonoscopy is expected to decrease.

We conducted a population-based study in Taiwan to verify the performance of the immunochemical fecal occult blood test in predicting lesions in the lower gastrointestinal tract and to confirm that results are not confounded by the presence of lesions in the upper tract. In Taiwan, the incidence of colorectal cancer is rapidly increasing, and Helicobacter pylori-related lesions in the upper gastrointestinal tract remain highly prevalent.23 Same-day bidirectional endoscopies are therefore commonly used for cancer screening.24 This screening strategy provides an opportunity to evaluate the performance of the immunochemical fecal occult blood test.

17.

Background:

Polymyalgia rheumatica is one of the most common inflammatory rheumatologic conditions in older adults. Other inflammatory rheumatologic disorders are associated with an excess risk of vascular disease. We investigated whether polymyalgia rheumatica is associated with an increased risk of vascular events.

Methods:

We used the General Practice Research Database to identify patients with a diagnosis of incident polymyalgia rheumatica between Jan. 1, 1987, and Dec. 31, 1999. Patients were matched by age, sex and practice with up to 5 patients without polymyalgia rheumatica. Patients were followed until their first vascular event (cardiovascular, cerebrovascular, peripheral vascular) or the end of available records (May 2011). All participants were free of vascular disease before the diagnosis of polymyalgia rheumatica (or matched date). We used Cox regression models to compare time to first vascular event in patients with and without polymyalgia rheumatica.

Results:

A total of 3249 patients with polymyalgia rheumatica and 12 735 patients without were included in the final sample. Over a median follow-up period of 7.8 (interquartile range 3.3–12.4) years, the rate of vascular events was higher among patients with polymyalgia rheumatica than among those without (36.1 v. 12.2 per 1000 person-years; adjusted hazard ratio 2.6, 95% confidence interval 2.4–2.9). The increased risk of a vascular event was similar for each vascular disease end point. The magnitude of risk was higher in early disease and in patients younger than 60 years at diagnosis.

Interpretation:

Patients with polymyalgia rheumatica have an increased risk of vascular events. This risk is greatest in the youngest age groups. As with other forms of inflammatory arthritis, patients with polymyalgia rheumatica should have their vascular risk factors identified and actively managed to reduce this excess risk.

Inflammatory rheumatologic disorders such as rheumatoid arthritis,1,2 systemic lupus erythematosus,2,3 gout,4 psoriatic arthritis2,5 and ankylosing spondylitis2,6 are associated with an increased risk of vascular disease, especially cardiovascular disease, leading to substantial morbidity and premature death.2–6 Recognition of this excess vascular risk has led to management guidelines advocating screening for and management of vascular risk factors.7–9

Polymyalgia rheumatica is one of the most common inflammatory rheumatologic conditions in older adults,10 with a lifetime risk of 2.4% for women and 1.7% for men.11 To date, evidence regarding the risk of vascular disease in patients with polymyalgia rheumatica is unclear. There are a number of biologically plausible mechanisms linking polymyalgia rheumatica and vascular disease.
These include the inflammatory burden of the disease,12,13 the association of the disease with giant cell arteritis (causing an inflammatory vasculopathy, which may lead to subclinical arteritis, stenosis or aneurysms),14 and the adverse effects of long-term corticosteroid treatment (e.g., diabetes, hypertension and dyslipidemia).15,16 Paradoxically, however, use of corticosteroids in patients with polymyalgia rheumatica may actually decrease vascular risk by controlling inflammation.17 A recent systematic review concluded that although some evidence exists to support an association between vascular disease and polymyalgia rheumatica,18 the existing literature presents conflicting results, with some studies reporting an excess risk of vascular disease19,20 and vascular death,21,22 and others reporting no association.23–26 Most current studies are limited by poor methodologic quality and small samples, and are based on secondary care cohorts, who may have more severe disease, yet most patients with polymyalgia rheumatica receive treatment exclusively in primary care.27

The General Practice Research Database (GPRD), based in the United Kingdom, is a large electronic system for primary care records. It has been used as a data source for previous studies,28 including studies on the association of inflammatory conditions with vascular disease29 and on the epidemiology of polymyalgia rheumatica in the UK.30 The aim of the current study was to examine the association between polymyalgia rheumatica and vascular disease in a primary care population.
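As a sanity check on figures like those in the abstract above, a crude incidence-rate ratio can be computed directly from the event rates per 1000 person-years; it will generally differ from the adjusted hazard ratio (2.6 here), which controls for covariates:

```python
def rate_ratio(rate_exposed: float, rate_unexposed: float) -> float:
    """Crude incidence-rate ratio from two rates on the same
    person-time scale (here, events per 1000 person-years)."""
    return rate_exposed / rate_unexposed

# 36.1 v. 12.2 vascular events per 1000 person-years, as reported above
print(round(rate_ratio(36.1, 12.2), 2))  # 2.96
```

The crude ratio of about 2.96 is higher than the adjusted hazard ratio of 2.6, consistent with some of the crude excess being explained by measured covariates.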

18.

Background:

Acute kidney injury is a serious complication of elective major surgery. Acute dialysis is used to support life in the most severe cases. We examined whether rates and outcomes of acute dialysis after elective major surgery have changed over time.

Methods:

We used data from Ontario’s universal health care databases to study all consecutive patients who had elective major surgery at 118 hospitals between 1995 and 2009. Our primary outcomes were acute dialysis within 14 days of surgery, death within 90 days of surgery and chronic dialysis for patients who did not recover kidney function.

Results:

A total of 552 672 patients underwent elective major surgery during the study period, 2231 of whom received acute dialysis. The incidence of acute dialysis increased steadily from 0.2% in 1995 (95% confidence interval [CI] 0.15–0.2) to 0.6% in 2009 (95% CI 0.6–0.7). This increase was primarily in cardiac and vascular surgeries. Among patients who received acute dialysis, 937 died within 90 days of surgery (42.0%, 95% CI 40.0–44.1), with no change in 90-day survival over time. Among the 1294 patients who received acute dialysis and survived beyond 90 days, 352 required chronic dialysis (27.2%, 95% CI 24.8–29.7), with no change over time.
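The proportions above are reported with 95% confidence intervals. As an illustration only (the abstract does not state which interval method the authors used), a Wilson score interval for the 937 deaths among 2231 dialysis recipients closely reproduces the reported 42.0% (40.0–44.1):

```python
import math

def wilson_ci(events, n, z=1.96):
    """Wilson score 95% confidence interval for a binomial proportion."""
    p = events / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# 937 deaths within 90 days among 2231 patients who received acute dialysis
lo, hi = wilson_ci(937, 2231)
print(f"{937/2231:.1%} (95% CI {lo:.1%}-{hi:.1%})")  # 42.0% (95% CI 40.0%-44.1%)
```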

Interpretation:

The use of acute dialysis after cardiac and vascular surgery has increased substantially since 1995. Studies focusing on interventions to better prevent and treat perioperative acute kidney injury are needed. More than 230 million elective major surgeries are done annually worldwide.1 Acute kidney injury is a serious complication of major surgery. It represents a sudden loss of kidney function that affects morbidity, mortality and health care costs.2 Dialysis is used for the most severe forms of acute kidney injury. In the nonsurgical setting, the incidence of acute dialysis has steadily increased over the last 15 years, and patients are now more likely to survive to discharge from hospital.3–5 Similarly, in the surgical setting, the incidence of acute dialysis appears to be increasing over time,6–10 with declining in-hospital mortality.8,10,11 Although previous studies have improved our understanding of the epidemiology of acute dialysis in the surgical setting, several questions remain. Many previous studies were conducted at a single centre, thereby limiting their generalizability.6,12–14 Most multicentre studies were conducted in the nonsurgical setting and used diagnostic codes for acute kidney injury not requiring dialysis; however, these codes can be inaccurate.15,16 In contrast, a procedure such as dialysis is easily determined. The incidence of acute dialysis after elective surgery is of particular interest given the need for surgical consent, the severe nature of the event and the potential for mitigation. The need for chronic dialysis among patients who do not recover renal function after surgery has been poorly studied, yet this condition has a major effect on patient survival and quality of life.17 For these reasons, we studied secular trends in acute dialysis after elective major surgery, focusing on incidence, 90-day mortality and need for chronic dialysis.

19.

Background

Cryotherapy is widely used for the treatment of cutaneous warts in primary care. However, evidence favours salicylic acid application. We compared the effectiveness of these treatments as well as a wait-and-see approach.

Methods

Consecutive patients with new cutaneous warts were recruited in 30 primary care practices in the Netherlands between May 1, 2006, and Jan. 26, 2007. We randomly allocated eligible patients to one of three groups: cryotherapy with liquid nitrogen every two weeks, self-application of salicylic acid daily or a wait-and-see approach. The primary outcome was the proportion of participants whose warts were all cured at 13 weeks. Analysis was on an intention-to-treat basis. Secondary outcomes included treatment adherence, side effects and treatment satisfaction. Research nurses assessed outcomes during home visits at 4, 13 and 26 weeks.

Results

Of the 250 participants (age 4 to 79 years), 240 were included in the analysis at 13 weeks (loss to follow-up 4%). Cure rates were 39% (95% confidence interval [CI] 29%–51%) in the cryotherapy group, 24% (95% CI 16%–35%) in the salicylic acid group and 16% (95% CI 9.5%–25%) in the wait-and-see group. Differences in effectiveness were most pronounced among participants with common warts (n = 116): cure rates were 49% (95% CI 34%–64%) in the cryotherapy group, 15% (95% CI 7%–30%) in the salicylic acid group and 8% (95% CI 3%–21%) in the wait-and-see group. Cure rates among the participants with plantar warts (n = 124) did not differ significantly between treatment groups.
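Whether two cure rates such as those above differ significantly can be checked with a two-proportion z-test. The sketch below uses hypothetical per-group counts (the abstract gives only the combined n = 116 for common warts), so the numbers are illustrative, not the trial's actual analysis:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-proportion z-test; returns z statistic and two-sided p-value."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                    # pooled proportion under H0
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))   # two-sided normal tail area
    return z, p_value

# Hypothetical: 19/39 cured with cryotherapy vs. 6/39 with salicylic acid
z, p = two_proportion_z(19, 39, 6, 39)
print(f"z = {z:.2f}, p = {p:.4f}")
```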

Interpretation

For common warts, cryotherapy was the most effective therapy in primary care. For plantar warts, we found no clinically relevant difference in effectiveness between cryotherapy, topical application of salicylic acid or a wait-and-see approach after 13 weeks. (Trial registration no. ISRCTN42730629) Cutaneous warts are common.1–3 Up to one-third of primary school children have warts, of which two-thirds resolve within two years.4,5 Because warts frequently result in discomfort,6 2% of the general population and 6% of school-aged children each year present with warts to their family physician.7,8 The usual treatment is cryotherapy with liquid nitrogen or, less frequently, topical application of salicylic acid.9–12 Some physicians choose a wait-and-see approach because of the benign natural course of warts and the risk of side effects of treatment.10,11 A recent Cochrane review on treatments of cutaneous warts concluded that available studies were small, poorly designed or limited to dermatology outpatients.10,11 Evidence on cryotherapy was contradictory,13–18 whereas the evidence on salicylic acid was more convincing.19–23 However, studies that compared cryotherapy and salicylic acid directly showed no differences in effectiveness.24,25 The Cochrane review called for high-quality trials in primary care to compare the effects of cryotherapy, salicylic acid and placebo. We conducted a three-arm randomized controlled trial to compare the effectiveness of cryotherapy with liquid nitrogen, topical application of salicylic acid and a wait-and-see approach for the treatment of common and plantar warts in primary care.

20.

Background

Observational studies and randomized controlled trials have yielded inconsistent findings about the association between the use of acid-suppressive drugs and the risk of pneumonia. We performed a systematic review and meta-analysis to summarize this association.

Methods

We searched three electronic databases (MEDLINE [PubMed], Embase and the Cochrane Library) from inception to Aug. 28, 2009. Two evaluators independently extracted data. Because of heterogeneity, we used random-effects meta-analysis to obtain pooled estimates of effect.

Results

We identified 31 studies: five case–control studies, three cohort studies and 23 randomized controlled trials. A meta-analysis of the eight observational studies showed that the overall risk of pneumonia was higher among people using proton pump inhibitors (adjusted odds ratio [OR] 1.27, 95% confidence interval [CI] 1.11–1.46, I² 90.5%) and histamine-2 receptor antagonists (adjusted OR 1.22, 95% CI 1.09–1.36, I² 0.0%). In the randomized controlled trials, use of histamine-2 receptor antagonists was associated with an elevated risk of hospital-acquired pneumonia (relative risk 1.22, 95% CI 1.01–1.48, I² 30.6%).
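Pooled odds ratios with an I² heterogeneity statistic, as reported above, are commonly produced by a DerSimonian–Laird random-effects model. A minimal sketch with hypothetical study data (log odds ratios and within-study variances; not the actual studies in this review):

```python
import math

def dersimonian_laird(y, v):
    """DerSimonian-Laird random-effects pooling of effects y with variances v."""
    k = len(y)
    w = [1 / vi for vi in v]                                  # fixed-effect weights
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, y))    # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                        # between-study variance
    w_star = [1 / (vi + tau2) for vi in v]                    # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_star, y)) / sum(w_star)
    se = math.sqrt(1 / sum(w_star))
    i2 = max(0.0, (q - (k - 1)) / q) if q > 0 else 0.0        # heterogeneity
    return pooled, se, tau2, i2

# Hypothetical log odds ratios and variances for five studies
y = [0.60, -0.10, 0.45, 0.22, 0.05]
v = [0.02, 0.05, 0.03, 0.04, 0.06]
pooled, se, tau2, i2 = dersimonian_laird(y, v)
lo, hi = pooled - 1.96 * se, pooled + 1.96 * se
print(f"pooled OR {math.exp(pooled):.2f} "
      f"(95% CI {math.exp(lo):.2f}-{math.exp(hi):.2f}), I2 {i2:.0%}")
```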

Interpretation

Use of a proton pump inhibitor or histamine-2 receptor antagonist may be associated with an increased risk of both community- and hospital-acquired pneumonia. Given these potential adverse effects, clinicians should use caution in prescribing acid-suppressive drugs for patients at risk. Recently, the medical literature has paid considerable attention to unrecognized adverse effects of commonly used medications and their potential public health impact.1 One group of medications in widespread use is acid-suppressive drugs, which represent the second leading category of medication worldwide, with sales totalling US$26.9 billion in 2005.2 Over the past 40 years, the development of potent acid-suppressive drugs, including proton pump inhibitors, has led to considerable improvements in the treatment of acid-related disorders of the upper gastrointestinal tract.3 Experts have generally viewed proton pump inhibitors as safe.4 However, potential complications such as gastrointestinal neoplasia, malabsorption of nutrients and increased susceptibility to infection have caused concern.5 Of special interest is the possibility that acid-suppressive drugs could increase susceptibility to respiratory infections because these drugs increase gastric pH, thus allowing bacterial colonization.6,7 Several previous studies have shown that treatment with acid-suppressive drugs might be associated with an increased risk of respiratory tract infections8 and community-acquired pneumonia in adults6,7 and children.9 However, the association between use of acid-suppressive drugs and risk of pneumonia has been inconsistent.10–13 Given the widespread use of proton pump inhibitors and histamine-2 receptor antagonists, clarifying the potential impact of acid-suppressive therapy on the risk of pneumonia is of great importance to public health.14 Previous meta-analyses have focused on the role of acid-suppressive drugs in preventing stress ulcer,11,13,15 but none have examined pneumonia as the primary outcome. The aim of this study was to summarize the association between the use of acid-suppressive drugs and the risk of pneumonia in observational studies and randomized controlled trials.
