Similar Articles
20 similar articles found.
1.

Background

Chronic neuropathic pain affects 1%–2% of the adult population and is often refractory to standard pharmacologic treatment. Patients with chronic pain have reported using smoked cannabis to relieve pain, improve sleep and improve mood.

Methods

Adults with post-traumatic or postsurgical neuropathic pain were randomly assigned to receive cannabis at four potencies (0%, 2.5%, 6% and 9.4% tetrahydrocannabinol) over four 14-day periods in a crossover trial. Participants inhaled a single 25-mg dose through a pipe three times daily for the first five days in each cycle, followed by a nine-day washout period. Daily average pain intensity was measured using an 11-point numeric rating scale. We recorded effects on mood, sleep and quality of life, as well as adverse events.

Results

We recruited 23 participants (mean age 45.4 [standard deviation 12.3] years, 12 women [52%]), of whom 21 completed the trial. The average daily pain intensity, measured on the 11-point numeric rating scale, was lower on the prespecified primary contrast of 9.4% v. 0% tetrahydrocannabinol (5.4 v. 6.1, respectively; difference = 0.7, 95% confidence interval [CI] 0.02–1.4). Preparations with intermediate potency yielded intermediate but nonsignificant degrees of relief. Participants receiving 9.4% tetrahydrocannabinol reported improved ability to fall asleep (easier, p = 0.001; faster, p < 0.001; more drowsy, p = 0.003) and improved quality of sleep (less wakefulness, p = 0.01) relative to 0% tetrahydrocannabinol. We found no differences in mood or quality of life. The most common drug-related adverse events during the period when participants received 9.4% tetrahydrocannabinol were headache, dry eyes, burning sensation in areas of neuropathic pain, dizziness, numbness and cough.
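
For readers who want to see the arithmetic behind a crossover contrast like this, the sketch below computes a paired mean difference with a 95% confidence interval in Python. The per-patient scores are invented for illustration; the trial's raw data (and its actual analysis model) are not given in the abstract.

    # Minimal sketch: 95% CI for a paired (crossover) difference on an
    # 11-point numeric rating scale. Scores below are hypothetical.
    import numpy as np
    from scipy import stats

    pain_placebo = np.array([6.0, 7.5, 5.5, 6.5, 8.0, 5.0, 6.0, 7.0])  # 0% THC period
    pain_thc = np.array([5.0, 7.0, 5.5, 5.0, 7.5, 4.5, 5.5, 6.0])      # 9.4% THC period

    diff = pain_placebo - pain_thc                  # within-patient differences
    mean_diff = diff.mean()
    se = diff.std(ddof=1) / np.sqrt(len(diff))
    t_crit = stats.t.ppf(0.975, df=len(diff) - 1)   # two-sided 95% critical value
    print(f"difference = {mean_diff:.2f}, "
          f"95% CI {mean_diff - t_crit * se:.2f} to {mean_diff + t_crit * se:.2f}")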

Conclusion

A single inhalation of 25 mg of 9.4% tetrahydrocannabinol herbal cannabis three times daily for five days reduced the intensity of pain, improved sleep and was well tolerated. Further long-term safety and efficacy studies are indicated. (International Standard Randomised Controlled Trial Register no. ISRCTN68314063)

Chronic neuropathic pain has a prevalence of 1%–2%,1 and treatment options are limited.2 Pharmacotherapy includes anticonvulsants, antidepressants, opioids and local anesthetics,3,4 but responses vary and side effects limit compliance.

Cannabis sativa has been used to treat pain since the third millennium BC.5 An endogenous pain-processing system has been identified, mediated by endogenous cannabinoid ligands acting on specific cannabinoid receptors.6 These findings, coupled with anecdotal evidence of the analgesic effects of smoked cannabis,7 support a reconsideration of cannabinoid agents as analgesics.

Oral cannabinoids such as tetrahydrocannabinol, cannabidiol and nabilone have, alone and in combination, shown efficacy in central8,9 and peripheral10 neuropathic pain, rheumatoid arthritis11 and fibromyalgia.12

The analgesic effects of smoked cannabis remain controversial, although it is used by 10%–15% of patients with chronic noncancer pain13 and multiple sclerosis.14 Clinical trials are needed to evaluate these effects, given that the risks and benefits of inhaled cannabinoids may differ from those of oral agents. To date, three small clinical trials of the analgesic efficacy of smoked cannabis have been reported.15–17 All studies were conducted in residential laboratories, and participants smoked multiple doses of the drug at each time point. No study adequately reported data related to adverse events.

We conducted a clinical trial using a standardized single-dose delivery system to explore further the safety and efficacy of smoked cannabis in outpatients with chronic neuropathic pain.

2.

Background:

The chronic cerebrospinal venous insufficiency theory proposes that altered cerebral venous hemodynamics play a role in the pathophysiology of multiple sclerosis. We aimed to explore the validity of this hypothesis by assessing the diagnostic criteria for chronic cerebrospinal venous insufficiency in persons with and without multiple sclerosis.

Methods:

We compared the proportion of venous outflow abnormalities between patients with multiple sclerosis and healthy controls using extracranial Doppler ultrasonography and gadolinium-enhanced magnetic resonance venography. Interpreting radiologists were blinded to the clinical status of participants.

Results:

We enrolled 120 patients with multiple sclerosis and 60 healthy controls. High proportions of both patients (67/115 [58%]) and controls (38/60 [63%]) met 1 or more of the proposed ultrasound criteria for diagnosis of chronic cerebrospinal venous insufficiency (p = 0.6). A minority of patients (23/115 [20%]) and controls (6/60 [10%]) fulfilled 2 or more of the proposed criteria (p = 0.1). There were no differences between patients and controls in the prevalence of each individual ultrasound criterion. Similarly, there were no differences in intracranial or extracranial venous patency between groups, as measured by magnetic resonance venography.
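
As a rough check of the first comparison, one can run a chi-squared test on the counts reported above (67/115 patients v. 38/60 controls meeting one or more criteria); the abstract does not state which test the authors used, so this is only an illustrative recalculation.

    # Sketch: chi-squared test on the reported 2x2 counts.
    from scipy.stats import chi2_contingency

    table = [[67, 115 - 67],   # patients: met >=1 criterion / did not
             [38, 60 - 38]]    # controls: met >=1 criterion / did not
    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.2f}, p = {p:.2f}")  # lands near the reported p = 0.6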

Interpretation:

We detected no differences in the proportion of venous outflow abnormalities between patients with multiple sclerosis and healthy controls. Moreover, our study revealed significant methodologic concerns regarding the proposed diagnostic criteria for chronic cerebrospinal venous insufficiency that challenge their validity.

Multiple sclerosis is an inflammatory disease of the central nervous system, believed to arise from a dysfunctional immune-mediated response in a genetically susceptible host.1 In 2009, “chronic cerebrospinal venous insufficiency” was proposed to play an etiologic role in multiple sclerosis.2–4 Despite an abundance of published literature on this topic,2–28 a causal link has not been established. Recent meta-analyses have suggested a strong association between an ultrasound-based diagnosis of chronic cerebrospinal venous insufficiency and multiple sclerosis,26,28 yet there has been significant heterogeneity across studies.26,27 A factor contributing to this heterogeneity appears to be the involvement of investigators who support endovascular procedures as a treatment for multiple sclerosis.27 Furthermore, these meta-analyses have been predicated on the assumption that valid diagnostic criteria for chronic cerebrospinal venous insufficiency exist.

We aimed to explore the validity of the chronic cerebrospinal venous insufficiency theory by using extracranial ultrasonography and gadolinium-enhanced magnetic resonance venography to compare the proportion of venous outflow abnormalities between patients with multiple sclerosis and healthy individuals. Our primary hypothesis was that if chronic cerebrospinal venous insufficiency is associated with multiple sclerosis, we would detect significant evidence of venous outflow obstruction in patients relative to controls.

3.
4.

Background:

There have been postmarketing reports of adverse cardiovascular events associated with the use of varenicline, a widely used smoking cessation drug. We conducted a systematic review and meta-analysis of randomized controlled trials to ascertain the serious adverse cardiovascular effects of varenicline compared with placebo among tobacco users.

Methods:

We searched MEDLINE, EMBASE, the Cochrane Database of Systematic Reviews, websites of regulatory authorities and registries of clinical trials, with no date or language restrictions, through September 2010 (updated March 2011) for published and unpublished studies. We selected double-blind randomized controlled trials of at least one week’s duration involving smokers or people who used smokeless tobacco that reported on cardiovascular events (ischemia, arrhythmia, congestive heart failure, sudden death or cardiovascular-related death) as serious adverse events associated with the use of varenicline.

Results:

We analyzed data from 14 double-blind randomized controlled trials involving 8216 participants. The trials ranged in duration from 7 to 52 weeks. Varenicline was associated with a significantly increased risk of serious adverse cardiovascular events compared with placebo (1.06% [52/4908] in varenicline group v. 0.82% [27/3308] in placebo group; Peto odds ratio [OR] 1.72, 95% confidence interval [CI] 1.09–2.71; I2 = 0%). The results of various sensitivity analyses were consistent with those of the main analysis, and a funnel plot showed no publication bias. There were too few deaths to allow meaningful comparisons of mortality.
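
The Peto one-step method quoted here is a standard way to pool rare adverse events across trials. A sketch of it appears below; the per-trial counts are placeholders, since the abstract gives only the pooled totals (52/4908 v. 27/3308 over 14 trials), so the output will not reproduce the published OR of 1.72.

    # Sketch: Peto odds ratio pooled across trials (one-step method).
    import math

    def peto_or(trials):
        """trials: list of (events_treatment, n_treatment, events_control, n_control)."""
        sum_oe, sum_v = 0.0, 0.0
        for a, n1, c, n2 in trials:
            n = n1 + n2
            m1 = a + c                                       # total events in trial
            expected = n1 * m1 / n                           # expected treatment events
            variance = n1 * n2 * m1 * (n - m1) / (n ** 2 * (n - 1))
            sum_oe += a - expected
            sum_v += variance
        log_or = sum_oe / sum_v
        half_width = 1.96 / math.sqrt(sum_v)
        return (math.exp(log_or),
                math.exp(log_or - half_width), math.exp(log_or + half_width))

    print(peto_or([(5, 350, 2, 340), (4, 700, 1, 350)]))     # hypothetical trials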

Interpretation:

Our meta-analysis raises safety concerns about the potential for an increased risk of serious adverse cardiovascular events associated with the use of varenicline among tobacco users.

Varenicline is one of the most widely used drugs for smoking cessation. It is a partial agonist at the α4–β2 nicotinic acetylcholine receptors and a full agonist at the α7 nicotinic acetylcholine receptor.1,2 The drug modulates parasympathetic output from the brainstem to the heart because of activities of the α7 receptor.3 Acute nicotine administration can induce thrombosis.4 Possible mechanisms by which varenicline may be associated with cardiovascular disease might include the action of varenicline at the α7 receptor in the brainstem or, similar to nicotine, a prothrombotic effect.2–4

At the time of its priority safety review of varenicline in 2006, the US Food and Drug Administration (FDA) noted that “[t]he serious adverse event data suggest that varenicline may possibly increase the risk of cardiac events, both ischemic and arrhythmic, particularly over longer treatment period.”5 Subsequently, the product label was updated: “Post marketing reports of myocardial infarction and cerebrovascular accidents including ischemic and hemorrhagic events have been reported in patients taking Chantix.”6 There are published reports of cardiac arrest associated with varenicline.7

Cardiovascular disease is an important cause of morbidity and mortality among tobacco users. The long-term cardiovascular benefits of smoking cessation are well established.8 Although one statistically underpowered trial reported a trend toward excess cardiovascular events associated with the use of varenicline,9 a systematic review of information on the cardiovascular effects of varenicline is unavailable to clinicians.

We conducted a systematic review and meta-analysis of randomized controlled trials (RCTs) to ascertain the serious adverse cardiovascular effects of varenicline compared with placebo among tobacco users.

5.

Background:

Evidence from observational studies has raised the possibility that statin treatment reduces the incidence of certain bacterial infections, particularly pneumonia. We analyzed data from a randomized controlled trial of rosuvastatin to examine this hypothesis.

Methods:

We analyzed data from the randomized, double-blind, placebo-controlled JUPITER trial (Justification for the Use of Statins in Prevention: an Intervention Trial Evaluating Rosuvastatin). In this trial, 17 802 healthy participants (men 50 years and older and women 60 and older) with a low-density lipoprotein (LDL) cholesterol level below 130 mg/dL (3.4 mmol/L) and a high-sensitivity C-reactive protein level of 2.0 mg/L or greater were randomly assigned to receive either rosuvastatin or placebo. We evaluated the incidence of pneumonia on an intention-to-treat basis by reviewing reports of adverse events from the study investigators, who were unaware of the treatment assignments.

Results:

Among 17 802 trial participants followed for a median of 1.9 years, incident pneumonia was reported as an adverse event in 214 participants in the rosuvastatin group and 257 in the placebo group (hazard ratio [HR] 0.83, 95% confidence interval [CI] 0.69–1.00). In analyses restricted to events occurring before a cardiovascular event, pneumonia occurred in 203 participants given rosuvastatin and 250 given placebo (HR 0.81, 95% CI 0.67–0.97). Inclusion of recurrent pneumonia events did not modify this effect (HR 0.81, 95% CI 0.67–0.98), nor did adjustment for age, sex, smoking, metabolic syndrome, lipid levels and C-reactive protein level.
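
Because the two arms were equally sized with comparable follow-up, the reported hazard ratio can be approximated from the raw event counts alone. The sketch below is a crude rate-ratio calculation, not the Cox model the investigators actually fit, but it lands close to the published estimate.

    # Sketch: crude rate ratio with 95% CI from the reported event counts.
    import math

    events_rosuvastatin, events_placebo = 214, 257
    rr = events_rosuvastatin / events_placebo
    se = math.sqrt(1 / events_rosuvastatin + 1 / events_placebo)  # log-scale SE
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    print(f"rate ratio = {rr:.2f}, 95% CI {lo:.2f} to {hi:.2f}")  # ~0.83 (0.69-1.00)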

Interpretation:

Data from this randomized controlled trial support the hypothesis that statin treatment may modestly reduce the incidence of pneumonia. (ClinicalTrials.gov trial register no. NCT00239681.)

Randomized trials of statin treatment have consistently shown reductions in the incidence of cardiovascular events.1 In addition to these proven vascular effects, several observational studies have raised the possibility that statins reduce the incidence and severity of certain bacterial infections,2–5 particularly pneumonia.6–9 Mechanistic support for this hypothesis is provided in part by laboratory evidence that statins, in addition to lowering low-density lipoprotein (LDL) cholesterol levels considerably, have favourable effects on inflammation, apoptosis, antioxidant balance and endothelial function.10 However, a common confounder typical of these observational studies relates to the fact that statin treatment may be a nonspecific marker of improved quality of care (healthy user effect).11,12 In addition, because infections such as pneumonia are a common complication of myocardial infarction and stroke, any beneficial effect of statin treatment on pneumonia and other infections reported in observational studies may have been due simply to a reduction in these vascular events.

We reviewed data from the recently completed JUPITER trial (Justification for the Use of Statins in Prevention: an Intervention Trial Evaluating Rosuvastatin), a randomized controlled trial involving more than 17 000 men and women randomly assigned to receive either rosuvastatin or placebo, to examine the possibility that statins may reduce the incidence of pneumonia.

6.

Background:

Previous studies have suggested that the immunochemical fecal occult blood test is specific for bleeding in the lower gastrointestinal tract, even when bleeding also occurs in the upper tract. We conducted a large population-based study involving asymptomatic adults in Taiwan, a population with prevalent upper gastrointestinal lesions, to confirm this claim.

Methods:

We conducted a prospective cohort study involving asymptomatic people aged 18 years or more in Taiwan recruited to undergo an immunochemical fecal occult blood test, colonoscopy and esophagogastroduodenoscopy between August 2007 and July 2009. We compared the prevalence of lesions in the lower and upper gastrointestinal tracts between patients with positive and negative fecal test results. We also identified risk factors associated with a false-positive fecal test result.

Results:

Of the 2796 participants, 397 (14.2%) had a positive fecal test result. The sensitivity of the test for predicting lesions in the lower gastrointestinal tract was 24.3%, the specificity 89.0%, the positive predictive value 41.3%, the negative predictive value 78.7%, the positive likelihood ratio 2.22, the negative likelihood ratio 0.85 and the accuracy 73.4%. The prevalence of lesions in the lower gastrointestinal tract was higher among those with a positive fecal test result than among those with a negative result (41.3% v. 21.3%, p < 0.001). The prevalence of lesions in the upper gastrointestinal tract did not differ significantly between the two groups (20.7% v. 17.5%, p = 0.12). Almost all of the participants found to have colon cancer (27/28, 96.4%) had a positive fecal test result; in contrast, none of the three found to have esophageal or gastric cancer had a positive fecal test result (p < 0.001). Among those with a negative finding on colonoscopy, the risk factors associated with a false-positive fecal test result were use of antiplatelet drugs (adjusted odds ratio [OR] 2.46, 95% confidence interval [CI] 1.21–4.98) and a low hemoglobin concentration (adjusted OR 2.65, 95% CI 1.62–4.33).
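
All of the test characteristics in this paragraph follow from a single 2×2 table. The cell counts below are reconstructed approximately from the reported totals (397/2796 positive tests, PPV 41.3%, NPV 78.7%) and recover the published figures; they are an inference, not numbers quoted by the authors.

    # Sketch: deriving the reported diagnostic metrics from one 2x2 table.
    tp, fp = 164, 233    # positive fecal test: with / without lower-GI lesion
    fn, tn = 511, 1888   # negative fecal test: with / without lower-GI lesion

    sensitivity = tp / (tp + fn)                      # ~0.243
    specificity = tn / (tn + fp)                      # ~0.890
    ppv = tp / (tp + fp)                              # ~0.413
    npv = tn / (tn + fn)                              # ~0.787
    lr_positive = sensitivity / (1 - specificity)     # ~2.22
    lr_negative = (1 - sensitivity) / specificity     # ~0.85
    accuracy = (tp + tn) / (tp + fp + fn + tn)        # ~0.734
    print(sensitivity, specificity, ppv, npv, lr_positive, lr_negative, accuracy)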

Interpretation:

The immunochemical fecal occult blood test was specific for predicting lesions in the lower gastrointestinal tract. However, the test did not adequately predict lesions in the upper gastrointestinal tract.

The fecal occult blood test is a convenient tool to screen for asymptomatic gastrointestinal bleeding.1 When the test result is positive, colonoscopy is the strategy of choice to investigate the source of bleeding.2,3 However, 13%–42% of patients can have a positive test result but a negative colonoscopy,4 and it has not yet been determined whether asymptomatic patients should then undergo evaluation of the upper gastrointestinal tract.

Previous studies showed that the frequency of lesions in the upper gastrointestinal tract was comparable to or even higher than that of colonic lesions5–9 and that the use of esophagogastroduodenoscopy may change clinical management.10,11 Some studies showed that evaluation of the upper gastrointestinal tract helped to identify important lesions in symptomatic patients and those with iron deficiency anemia;12,13 however, others concluded that esophagogastroduodenoscopy was unjustified because important findings in the upper gastrointestinal tract were rare14–17 and sometimes irrelevant to the results of fecal occult blood testing.18–21 This controversy is related to the heterogeneity of study populations and to the limitations of the formerly used guaiac-based fecal occult blood test,5–20 which was not able to distinguish bleeding in the lower gastrointestinal tract from that originating in the upper tract.

The guaiac-based fecal occult blood test is increasingly being replaced by the immunochemical-based test. The latter is recommended for detecting bleeding in the lower gastrointestinal tract because it reacts with human globin, a protein that is digested by enzymes in the upper gastrointestinal tract.22 With this advantage, the occurrence of a positive fecal test result and a negative finding on colonoscopy is expected to decrease.

We conducted a population-based study in Taiwan to verify the performance of the immunochemical fecal occult blood test in predicting lesions in the lower gastrointestinal tract and to confirm that results are not confounded by the presence of lesions in the upper tract. In Taiwan, the incidence of colorectal cancer is rapidly increasing, and Helicobacter pylori-related lesions in the upper gastrointestinal tract remain highly prevalent.23 Same-day bidirectional endoscopies are therefore commonly used for cancer screening.24 This screening strategy provides an opportunity to evaluate the performance of the immunochemical fecal occult blood test.

7.

Background:

Some children feel pain during wound closures using tissue adhesives. We sought to determine whether a topically applied analgesic solution of lidocaine–epinephrine–tetracaine would decrease pain during tissue adhesive repair.

Methods:

We conducted a randomized, placebo-controlled, blinded trial involving 221 children between the ages of 3 months and 17 years. Patients were enrolled between March 2011 and January 2012 when presenting to a tertiary-care pediatric emergency department with lacerations requiring closure with tissue adhesive. Patients received either lidocaine–epinephrine–tetracaine or placebo before undergoing wound closure. Our primary outcome was the pain rating of adhesive application according to the colour Visual Analogue Scale and the Faces Pain Scale — Revised. Our secondary outcomes were physician ratings of difficulty of wound closure and wound hemostasis, in addition to their prediction as to which treatment the patient had received.

Results:

Children who received the analgesic before wound closure reported less pain (median 0.5, interquartile range [IQR] 0.25–1.50) than those who received placebo (median 1.00, IQR 0.38–2.50) as rated using the colour Visual Analogue Scale (p = 0.01) and Faces Pain Scale – Revised (median 0.00, IQR 0.00–2.00, for analgesic v. median 2.00, IQR 0.00–4.00, for placebo, p < 0.01). Patients who received the analgesic were significantly more likely to report having or to appear to have a pain-free procedure (relative risk [RR] of pain 0.54, 95% confidence interval [CI] 0.37–0.80). Complete hemostasis of the wound was also more common among patients who received lidocaine–epinephrine–tetracaine than among those who received placebo (78.2% v. 59.3%, p = 0.008).
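
Outcomes reported as median (IQR) are typically compared with a rank-based test. The abstract does not name the test used, so the sketch below simply illustrates a Mann–Whitney U comparison on invented pain-score vectors.

    # Sketch: rank-based comparison of two groups' pain scores.
    from scipy.stats import mannwhitneyu

    vas_analgesic = [0.25, 0.5, 0.5, 1.0, 1.5, 0.0, 0.75]  # hypothetical scores
    vas_placebo = [0.5, 1.0, 2.5, 1.0, 3.0, 0.25, 2.0]     # hypothetical scores
    stat, p = mannwhitneyu(vas_analgesic, vas_placebo, alternative="two-sided")
    print(f"U = {stat}, p = {p:.3f}")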

Interpretation:

Treating minor lacerations with lidocaine–epinephrine–tetracaine before wound closure with tissue adhesive reduced ratings of pain and increased the proportion of pain-free repairs among children aged 3 months to 17 years. This low-risk intervention may benefit children with lacerations requiring tissue adhesives instead of sutures. Trial registration: ClinicalTrials.gov, no. PR 6138378804.

Minor laceration repair with tissue adhesive, or “skin glue,” is common in pediatrics. Although less painful than cutaneous sutures,1 tissue adhesives polymerize through an exothermic reaction that may cause a burning, painful sensation. Pain is dependent on the specific formulation of the adhesive used and the method of application. One study of different tissue adhesives reported 23.8%–40.5% of participants feeling a “burning sensation”,2 whereas another study reported “pain” in 17.6%–44.1% of children.3 The amounts of adhesive applied, method of application and individual patient characteristics can also influence the feeling of pain.3,4 Because tissue adhesives polymerize on contact with moisture,4,5 poor wound hemostasis has the potential to cause premature setting of the adhesive, leading to less efficient and more painful repairs.6

Preventing procedural pain is a high priority in pediatric care.7 Inadequate analgesia for pediatric procedures may result in more complicated procedures, increased pain sensitivity with future procedures8 and increased fear and anxiety of medical experiences persisting into adulthood.9 A practical method to prevent pain during laceration repairs with tissue adhesive would have a substantial benefit for children.

A topically applied analgesic solution containing lidocaine–epinephrine–tetracaine with vasoconstrictive properties provides safe and effective pain control during wound repair using sutures.10 A survey of pediatric emergency fellowship directors in the United States reported that 76% of respondents use this solution or a similar solution when suturing 3-cm chin lacerations in toddlers.11 However, in a hospital chart review, this solution was used in less than half of tissue adhesive repairs, the remainder receiving either local injection of anesthetic or no pain control.12 Reluctance to use lidocaine–epinephrine–tetracaine with tissue adhesive may be due to the perception that it is not worth the minimum 20-minute wait required for the analgesic to take effect13 or to a lack of awareness that tissue adhesives can cause pain.

We sought to investigate whether preapplying lidocaine–epinephrine–tetracaine would decrease pain in children during minor laceration repair using tissue adhesive.

8.
Laupacis A, Lillie E, Dueck A, Straus S, Perrier L, Burton JM, Aviv R, Thorpe K, Feasby T, Spears J. CMAJ 2011;183(16):E1203–E1212.

Background:

It has been proposed by Zamboni and colleagues that multiple sclerosis is caused by chronic cerebrospinal venous insufficiency, a term used to describe ultrasound-detectable abnormalities in the anatomy and flow of intra- and extracerebral veins. We conducted a meta-analysis of studies that reported the frequency of chronic cerebrospinal venous insufficiency among patients with and those without multiple sclerosis.

Methods:

We searched MEDLINE and EMBASE as well as bibliographies of relevant articles for eligible studies. We included studies if they used ultrasound to diagnose chronic cerebrospinal venous insufficiency and compared the frequency of the venous abnormalities among patients with and those without multiple sclerosis.

Results:

We identified eight eligible studies: all included healthy controls, and four of them also included a control group of patients with neurologic diseases other than multiple sclerosis. Chronic cerebrospinal venous insufficiency was more frequent among patients with multiple sclerosis than among the healthy controls (odds ratio [OR] 13.5, 95% confidence interval [CI] 2.6–71.4), but there was extensive unexplained heterogeneity among the studies. The association remained significant in the most conservative sensitivity analysis (OR 3.7, 95% CI 1.2–11.0), in which we removed the initial study by Zamboni and colleagues and added a study that did not find chronic cerebrospinal venous insufficiency in any patient. Although chronic cerebrospinal venous insufficiency was also more frequent among patients with multiple sclerosis than among controls with other neurologic diseases (OR 32.5, 95% CI 0.6–1775.7), the association was not statistically significant, the 95% CI was wide, and the OR was less extreme after removal of the study by Zamboni and colleagues (OR 3.5, 95% CI 0.8–15.8).
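
The extremely wide interval for the comparison against other neurologic diseases (0.6–1775.7) is what sparse 2×2 cells do to a log-scale confidence interval. A sketch of the standard Woolf interval, with hypothetical sparse counts, shows the effect.

    # Sketch: odds ratio with Woolf (log) 95% CI; sparse cells blow up the SE.
    import math

    def odds_ratio_ci(a, b, c, d, z=1.96):
        """a,b: cases/non-cases among exposed; c,d: among unexposed."""
        or_ = (a * d) / (b * c)
        se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
        return or_, math.exp(math.log(or_) - z * se), math.exp(math.log(or_) + z * se)

    print(odds_ratio_ci(18, 2, 1, 19))  # hypothetical sparse table -> very wide CI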

Interpretation:

Our findings showed a positive association between chronic cerebrospinal venous insufficiency and multiple sclerosis. However, poor reporting of the success of blinding and marked heterogeneity among the studies included in our review precluded definitive conclusions.

Multiple sclerosis is a chronic demyelinating and degenerative disease of the central nervous system. The exact cause remains unknown, but most evidence favours an autoimmune mechanism.1 In 2006, Zamboni and colleagues proposed that multiple sclerosis is caused by abnormalities in the direction and pathway of cerebral venous flow, leading to deposition of iron in the brain, which triggers an autoimmune reaction.2 They reported that patients with multiple sclerosis had a higher frequency of abnormalities of anatomy and flow in the internal jugular, deep cerebral, vertebral and azygous veins than individuals without multiple sclerosis had.3,4 They called this condition chronic cerebrospinal venous insufficiency. They further described detection of this condition by means of transcranial and extracranial Doppler ultrasonography. This method of detection requires the evaluation of five ultrasound parameters that assess both venous blood flow and anatomy.3,5 Chronic cerebrospinal venous insufficiency is diagnosed if a patient has an abnormality in two or more of the five parameters.

There is controversy about the frequency and role of chronic cerebrospinal venous insufficiency in patients with multiple sclerosis6,7 and whether the frequency differs between patients with and those without multiple sclerosis. We performed a systematic review and meta-analysis of all peer-reviewed reports of studies that compared the frequency of chronic cerebrospinal venous insufficiency among patients with and those without multiple sclerosis.

9.

Background:

The gut microbiota is essential to human health throughout life, yet the acquisition and development of this microbial community during infancy remains poorly understood. Meanwhile, there is increasing concern over rising rates of cesarean delivery and insufficient exclusive breastfeeding of infants in developed countries. In this article, we characterize the gut microbiota of healthy Canadian infants and describe the influence of cesarean delivery and formula feeding.

Methods:

We included a subset of 24 term infants from the Canadian Healthy Infant Longitudinal Development (CHILD) birth cohort. Mode of delivery was obtained from medical records, and mothers were asked to report on infant diet and medication use. Fecal samples were collected at 4 months of age, and we characterized the microbiota composition using high-throughput DNA sequencing.

Results:

We observed high variability in the profiles of fecal microbiota among the infants. The profiles were generally dominated by Actinobacteria (mainly the genus Bifidobacterium) and Firmicutes (with diverse representation from numerous genera). Compared with breastfed infants, formula-fed infants had increased richness of species, with overrepresentation of Clostridium difficile. Escherichia–Shigella and Bacteroides species were underrepresented in infants born by cesarean delivery. Infants born by elective cesarean delivery had particularly low bacterial richness and diversity.
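
For readers unfamiliar with the terms, "richness" is the number of taxa observed and "diversity" is usually summarized with an index such as Shannon's H. A minimal sketch, with invented genus-level counts, follows.

    # Sketch: richness and Shannon diversity from taxon counts (hypothetical data).
    import math

    counts = {"Bifidobacterium": 620, "Clostridium": 80, "Bacteroides": 40,
              "Escherichia": 15, "Veillonella": 5}
    total = sum(counts.values())
    richness = len(counts)                       # number of taxa observed
    shannon = -sum((n / total) * math.log(n / total) for n in counts.values())
    print(f"richness = {richness}, Shannon H = {shannon:.2f}")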

Interpretation:

These findings advance our understanding of the gut microbiota in healthy infants. They also provide new evidence for the effects of delivery mode and infant diet as determinants of this essential microbial community in early life.

The human body harbours trillions of microbes, known collectively as the “human microbiome.” By far the highest density of commensal bacteria is found in the digestive tract, where resident microbes outnumber host cells by at least 10 to 1. Gut bacteria play a fundamental role in human health by promoting intestinal homeostasis, stimulating development of the immune system, providing protection against pathogens, and contributing to the processing of nutrients and harvesting of energy.1,2 The disruption of the gut microbiota has been linked to an increasing number of diseases, including inflammatory bowel disease, necrotizing enterocolitis, diabetes, obesity, cancer, allergies and asthma.1 Despite this evidence and a growing appreciation for the integral role of the gut microbiota in lifelong health, relatively little is known about the acquisition and development of this complex microbial community during infancy.3

Two of the best-studied determinants of the gut microbiota during infancy are mode of delivery and exposure to breast milk.4,5 Cesarean delivery perturbs normal colonization of the infant gut by preventing exposure to maternal microbes, whereas breastfeeding promotes a “healthy” gut microbiota by providing selective metabolic substrates for beneficial bacteria.3,5 Despite recommendations from the World Health Organization,6 the rate of cesarean delivery has continued to rise in developed countries and rates of breastfeeding decrease substantially within the first few months of life.7,8 In Canada, more than 1 in 4 newborns are born by cesarean delivery, and less than 15% of infants are exclusively breastfed for the recommended duration of 6 months.9,10 In some parts of the world, elective cesarean deliveries are performed by maternal request, often because of apprehension about pain during childbirth, and sometimes for patient–physician convenience.11

The potential long-term consequences of decisions regarding mode of delivery and infant diet are not to be underestimated. Infants born by cesarean delivery are at increased risk of asthma, obesity and type 1 diabetes,12 whereas breastfeeding is variably protective against these and other disorders.13 These long-term health consequences may be partially attributable to disruption of the gut microbiota.12,14

Historically, the gut microbiota has been studied with the use of culture-based methodologies to examine individual organisms. However, up to 80% of intestinal microbes cannot be grown in culture.3,15 New technology using culture-independent DNA sequencing enables comprehensive detection of intestinal microbes and permits simultaneous characterization of entire microbial communities. Multinational consortia have been established to characterize the “normal” adult microbiome using these exciting new methods;16 however, these methods have been underused in infant studies.
Because early colonization may have long-lasting effects on health, infant studies are vital.3,4 Among the few studies of infant gut microbiota using DNA sequencing, most were conducted in restricted populations, such as infants delivered vaginally,17 infants born by cesarean delivery who were formula-fed18 or preterm infants with necrotizing enterocolitis.19

Thus, the gut microbiota is essential to human health, yet the acquisition and development of this microbial community during infancy remains poorly understood.3 In the current study, we address this gap in knowledge using new sequencing technology and detailed exposure assessments20 of healthy Canadian infants selected from a national birth cohort to provide representative, comprehensive profiles of gut microbiota according to mode of delivery and infant diet.

10.

Background:

Results of randomized controlled trials evaluating zinc for the treatment of the common cold are conflicting. We conducted a systematic review and meta-analysis to evaluate the efficacy and safety of zinc for such use.

Methods:

We searched electronic databases and other sources for studies published through to Sept. 30, 2011. We included all randomized controlled trials comparing orally administered zinc with placebo or no treatment. Assessment for study inclusion, data extraction and risk-of-bias analyses were performed in duplicate. We conducted meta-analyses using a random-effects model.

Results:

We included 17 trials involving a total of 2121 participants. Compared with patients given placebo, those receiving zinc had a shorter duration of cold symptoms (mean difference −1.65 days, 95% confidence interval [CI] −2.50 to −0.81); however, heterogeneity was high (I2 = 95%). Zinc shortened the duration of cold symptoms in adults (mean difference −2.63, 95% CI −3.69 to −1.58), but no significant effect was seen among children (mean difference −0.26, 95% CI −0.78 to 0.25). Heterogeneity remained high in all subgroup analyses, including by age, dose of ionized zinc and zinc formulation. The occurrence of any adverse event (risk ratio [RR] 1.24, 95% CI 1.05 to 1.46), bad taste (RR 1.65, 95% CI 1.27 to 2.16) and nausea (RR 1.64, 95% CI 1.19 to 2.27) were more common in the zinc group than in the placebo group.
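
The "random-effects model" and I2 statistic referred to here are commonly implemented with the DerSimonian–Laird estimator; a compact sketch follows, using hypothetical per-trial mean differences and standard errors rather than the review's actual data.

    # Sketch: DerSimonian-Laird random-effects pooling with an I2 statistic.
    import math

    effects = [-2.5, -1.0, -3.1, -0.2, -1.8]  # hypothetical mean differences (days)
    ses = [0.6, 0.5, 0.9, 0.4, 0.7]           # hypothetical standard errors

    w = [1 / s ** 2 for s in ses]                             # fixed-effect weights
    pooled_fixed = sum(wi * y for wi, y in zip(w, effects)) / sum(w)
    q = sum(wi * (y - pooled_fixed) ** 2 for wi, y in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                             # between-trial variance
    i2 = max(0.0, (q - df) / q) * 100                         # heterogeneity, %

    w_re = [1 / (s ** 2 + tau2) for s in ses]                 # random-effects weights
    pooled = sum(wi * y for wi, y in zip(w_re, effects)) / sum(w_re)
    se_pooled = 1 / math.sqrt(sum(w_re))
    print(f"MD = {pooled:.2f} (95% CI {pooled - 1.96 * se_pooled:.2f} to "
          f"{pooled + 1.96 * se_pooled:.2f}), I2 = {i2:.0f}%")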

Interpretation:

The results of our meta-analysis showed that oral zinc formulations may shorten the duration of symptoms of the common cold. However, large high-quality trials are needed before definitive recommendations for clinical practice can be made. Adverse effects were common and should be the point of future study, because a good safety and tolerance profile is essential when treating this generally mild illness.

The common cold is a frequent respiratory infection experienced 2 to 4 times a year by adults and up to 8 to 10 times a year by children.1–3 Colds can be caused by several viruses, of which rhinoviruses are the most common.4 Despite their benign nature, colds can lead to substantial morbidity, absenteeism and lost productivity.5–7

Zinc, which can inhibit rhinovirus replication and has activity against other respiratory viruses such as respiratory syncytial virus,8 is a potential treatment for the common cold. The exact mechanism of zinc’s activity on viruses remains uncertain. Zinc may also reduce the severity of cold symptoms by acting as an astringent on the trigeminal nerve.9,10

A recent meta-analysis of randomized controlled trials concluded that zinc was effective at reducing the duration and severity of common cold symptoms.11 However, there was considerable heterogeneity reported for the primary outcome (I2 = 93%), and subgroup analyses to explore between-study variations were not performed. The efficacy of zinc therefore remains uncertain, because it is unknown whether the variability among studies was due to methodologic diversity (i.e., risk of bias and therefore uncertainty in zinc’s efficacy) or differences in study populations or interventions (i.e., zinc dose and formulation).

We conducted a systematic review and meta-analysis to evaluate the efficacy and safety of zinc for the treatment of the common cold. We sought to improve upon previous systematic reviews11–17 by exploring the heterogeneity with subgroups identified a priori, identifying new trials by instituting a broader search and obtaining additional data from authors.

11.

Background:

Screening for methicillin-resistant Staphylococcus aureus (MRSA) is intended to reduce nosocomial spread by identifying patients colonized by MRSA. Given the widespread use of this screening, we evaluated its potential clinical utility in predicting the resistance of clinical isolates of S. aureus.

Methods:

We conducted a 2-year retrospective cohort study that included patients with documented clinical infection with S. aureus and prior screening for MRSA. We determined test characteristics, including sensitivity and specificity, of screening for predicting the resistance of subsequent S. aureus isolates.

Results:

Of 510 patients included in the study, 53 (10%) had positive results from MRSA screening, and 79 (15%) of infecting isolates were resistant to methicillin. Screening for MRSA predicted methicillin resistance of the infecting isolate with 99% (95% confidence interval [CI] 98%–100%) specificity and 63% (95% CI 52%–74%) sensitivity. When screening swabs were obtained within 48 hours before isolate collection, sensitivity increased to 91% (95% CI 71%–99%) and specificity was 100% (95% CI 97%–100%), yielding a negative likelihood ratio of 0.09 (95% CI 0.01–0.3) and a negative predictive value of 98% (95% CI 95%–100%). The time between swab and isolate collection was a significant predictor of concordance of methicillin resistance in swabs and isolates (odds ratio 6.6, 95% CI 1.6–28.2).
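
The link between the reported negative likelihood ratio and negative predictive value can be checked directly with Bayes' rule on the odds scale: starting from the 15% prevalence of methicillin resistance and the within-48-hours sensitivity and specificity, the post-test probability comes out near 2%, i.e., an NPV of roughly 98%.

    # Sketch: LR- and post-test probability after a negative screen.
    sensitivity, specificity, prevalence = 0.91, 1.00, 0.15

    lr_negative = (1 - sensitivity) / specificity        # = 0.09
    pretest_odds = prevalence / (1 - prevalence)
    posttest_odds = pretest_odds * lr_negative
    posttest_prob = posttest_odds / (1 + posttest_odds)  # ~0.016 -> NPV ~98%
    print(f"LR- = {lr_negative:.2f}, post-test probability = {posttest_prob:.3f}")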

Interpretation:

A positive result from MRSA screening predicted methicillin resistance in a culture-positive clinical infection with S. aureus. Negative results on MRSA screening were most useful for excluding methicillin resistance of a subsequent infection with S. aureus when the screening swab was obtained within 48 hours before collection of the clinical isolate.

Antimicrobial resistance is a global problem. The prevalence of resistant bacteria, including methicillin-resistant Staphylococcus aureus (MRSA), has reached high levels in many countries.1–3 Methicillin resistance in S. aureus is associated with excess mortality, hospital stays and health care costs,3,4 possibly owing to increased virulence or less effective treatments for MRSA compared with methicillin-sensitive S. aureus (MSSA).5

The initial selection of appropriate empirical antibiotic treatment affects mortality, morbidity and potential health care expenditures.6–8 The optimal choice of antibiotics in S. aureus infections is important for 3 major reasons: β-lactam antibiotics have shown improved efficacy over vancomycin and are the ideal treatment for susceptible strains of S. aureus;6 β-lactam antibiotics are ineffective against MRSA, and so vancomycin or other newer agents must be used empirically when MRSA is suspected; and unnecessary use of broad-spectrum antibiotics (e.g., vancomycin) can lead to the development of further antimicrobial resistance.9 It is therefore necessary to make informed decisions regarding selection of empirical antibiotics.10–13 Consideration of a patient’s previous colonization status is important, because colonization predates most hospital and community-acquired infections.10,14

Universal or targeted surveillance for MRSA has been implemented widely as a means of limiting transmission of this antibiotic-resistant pathogen.15,16 Although results of MRSA screening are not intended to guide empirical treatment, they may offer an additional benefit among patients in whom clinical infection with S. aureus develops.

Studies that examined the effects of MRSA carriage on the subsequent likelihood of infection allude to the potential diagnostic benefit of prior screening for MRSA.17,18 Colonization by MRSA at the time of hospital admission is associated with a 13-fold increased risk of subsequent MRSA infection.17,18 Moreover, studies that examined nasal carriage of S. aureus after documented S. aureus bacteremia have shown remarkable concordance between the genotypes of paired colonizing and invasive strains (82%–94%).19,20 The purpose of our study was to identify the usefulness of prior screening for MRSA for predicting methicillin resistance in culture-positive S. aureus infections.

12.
Schultz AS, Finegan B, Nykiforuk CI, Kvern MA. CMAJ 2011;183(18):E1334–E1344.

Background:

Many hospitals have adopted smoke-free policies on their property. We examined the consequences of such policies at two Canadian tertiary acute-care hospitals.

Methods:

We conducted a qualitative study using ethnographic techniques over a six-month period. Participants (n = 186) shared their perspectives on and experiences with tobacco dependence and managing the use of tobacco, as well as their impressions of the smoke-free policy. We interviewed inpatients individually from eight wards (n = 82), key policy-makers (n = 9) and support staff (n = 14) and held 16 focus groups with health care providers and ward staff (n = 81). We also reviewed ward documents relating to tobacco dependence and looked at smoking-related activities on hospital property.

Results:

Noncompliance with the policy and exposure to secondhand smoke were ongoing concerns. People’s impressions of the use of tobacco varied, including divergent opinions as to whether such use was a bad habit or an addiction. Treatment for tobacco dependence and the management of symptoms of withdrawal were offered inconsistently. Participants voiced concerns over patient safety and leaving the ward to smoke.

Interpretation:

Policies mandating smoke-free hospital property have important consequences beyond noncompliance, including concerns over patient safety and disruptions to care. Without adequately available and accessible support for withdrawal from tobacco, patients will continue to face personal risk when they leave hospital property to smoke.

Canadian cities and provinces have passed smoking bans with the goal of reducing people’s exposure to secondhand smoke in workplaces, public spaces and on the property adjacent to public buildings.1,2 In response, Canadian health authorities and hospitals began implementing policies mandating smoke-free hospital property, with the goals of reducing the exposure of workers, patients and visitors to tobacco smoke while delivering a public health message about the dangers of smoking.2–5 An additional anticipated outcome was the reduced use of tobacco among patients and staff. The impetuses for adopting smoke-free policies include public support for such legislation and the potential for litigation for exposure to second-hand smoke.2,4

Tobacco use is a modifiable risk factor associated with a variety of cancers, cardiovascular diseases and respiratory conditions.6–11 Patients in hospital who use tobacco tend to have more surgical complications and exacerbations of acute and chronic health conditions than patients who do not use tobacco.6–11 Any policy aimed at reducing exposure to tobacco in hospitals is well supported by evidence, as is the integration of interventions targeting tobacco dependence.12 Unfortunately, most of the nearly five million Canadians who smoke will receive suboptimal treatment,13 as the routine provision of interventions for tobacco dependence in hospital settings is not a practice norm.14–16 In smoke-free hospitals, two studies suggest minimal support is offered for withdrawal,17,18 and one reports an increased use of nicotine-replacement therapy after the implementation of the smoke-free policy.19

Assessments of the effectiveness of smoke-free policies for hospital property tend to focus on noncompliance and related issues of enforcement.17,20,21 Although evidence of noncompliance and litter on hospital property2,17,20 implies ongoing exposure to tobacco smoke, half of the participating hospital sites in one study reported less exposure to tobacco smoke within hospital buildings and on the property.18 In addition, there is evidence to suggest some decline in smoking among staff.18,19,21,22

We sought to determine the consequences of policies mandating smoke-free hospital property in two Canadian acute-care hospitals by eliciting lived experiences of the people faced with enacting the policies: patients and health care providers. In addition, we elicited stories from hospital support staff and administrators regarding the policies.

13.

Background:

The true benefit of iron supplementation for nonanemic menstruating women with fatigue is unknown. We studied the effect of oral iron therapy on fatigue and quality of life, as well as on hemoglobin, ferritin and soluble transferrin receptor levels, in nonanemic iron-deficient women with unexplained fatigue.

Methods:

We performed a multicentre, parallel, randomized controlled, closed-label, observer-blinded trial. We recruited from the practices of 44 primary care physicians in France from March to July 2006. We randomly assigned 198 women aged 18–53 years who complained of fatigue and who had a ferritin level of less than 50 μg/L and hemoglobin greater than 12.0 g/dL to receive either oral ferrous sulfate (80 mg of elemental iron daily; n = 102) or placebo (n = 96) for 12 weeks. The primary outcome was fatigue as measured on the Current and Past Psychological Scale. Biological markers were measured at 6 and 12 weeks.

Results:

The mean score on the Current and Past Psychological Scale for fatigue decreased by 47.7% in the iron group and by 28.8% in the placebo group (difference –18.9%, 95% CI −34.5 to −3.2; p = 0.02), but there were no significant effects on quality of life (p = 0.2), depression (p = 0.97) or anxiety (p = 0.5). Compared with placebo, iron supplementation increased hemoglobin (0.32 g/dL; p = 0.002) and ferritin (11.4 μg/L; p < 0.001) and decreased soluble transferrin receptor (−0.54 mg/L; p < 0.001) at 12 weeks.

Interpretation:

Iron supplementation should be considered for women with unexplained fatigue who have ferritin levels below 50 μg/L. We suggest assessing the efficacy using blood markers after six weeks of treatment. Trial registration no. EudraCT 2006–000478–56.

The prevalence of fatigue ranges from 14% to 27% among patients in primary care.1 In addition, 1%–2% of visits to general practices are because of fatigue, and women are three times more likely than men to mention fatigue.1 Unexplained fatigue can be caused by iron deficiency.2 Verdon and coauthors found an improvement in fatigue following iron supplementation in nonanemic women with unexplained fatigue.3 However, the hemoglobin levels of these patients were not available, which may have contributed to the ongoing debate about the appropriateness of reference limits defining anemia in women.4,5 Thus, the effectiveness of iron supplementation in nonanemic menstruating women with major fatigue without an obvious clinical cause is unknown.6

Our main objective was to test the hypothesis that oral iron therapy for a short period may improve fatigue, hemoglobin, iron stores and quality of life in menstruating nonanemic women whose ferritin levels are below 50 μg/L. Our secondary objective was to evaluate whether this effect is dependent on the initial levels of hemoglobin, ferritin or transferrin saturation.

14.
15.

Background:

Although injection drug use is known to result in a range of health-related harms, including transmission of HIV and fatal overdose, little is known about the possible role of synthetic drugs in injection initiation. We sought to determine the effect of crystal methamphetamine use on risk of injection initiation among street-involved youth in a Canadian setting.

Methods:

We used Cox regression analyses to identify predictors of injection initiation among injection-naive street-involved youth enrolled in the At-Risk Youth Study, a prospective cohort study of street-involved youth in Vancouver, British Columbia. Data on circumstances of first injection were also obtained.

Results:

Between October 2005 and November 2010, a total of 395 drug injection–naive, street-involved youth provided 1434 observations, with 64 (16.2%) participants initiating injection drug use during the follow-up period, for a cumulative incidence of 21.7 (95% confidence interval [CI] 1.7–41.7) per 100 person-years. In multivariable analysis, recent noninjection use of crystal methamphetamine was positively associated with subsequent injection initiation (adjusted hazard ratio 1.93, 95% CI 1.31–2.85). The drug of first injection was most commonly reported as crystal methamphetamine (14/31 [45%]).
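
The cumulative incidence quoted here is a person-time rate. The person-years figure below is backed out from the reported rate (64 initiations at 21.7 per 100 person-years implies roughly 295 person-years of injection-naive follow-up); it is an assumption, not a number stated in the abstract.

    # Sketch: crude incidence rate per 100 person-years.
    events = 64
    person_years = 295  # assumed; inferred from the reported rate, not stated
    rate = events / person_years * 100
    print(f"{rate:.1f} initiations per 100 person-years")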

Interpretation:

Noninjection use of crystal methamphetamine predicted subsequent injection initiation, and crystal methamphetamine was the most commonly used drug at the time of first injection. Evidence-based strategies to prevent transition to injection drug use among crystal methamphetamine users are urgently needed.

Street-involved youth are at high risk of initiating injection drug use.1 This situation is of concern, given that injection drug use has been associated with increased risk of transmission of HIV and hepatitis C virus2,3 and fatal overdose,4 as well as a range of other serious negative health and social outcomes. Newly initiated injection drug users have also been identified as a subpopulation at particularly high risk of injection-related harm.5–8 Unfortunately, despite recent calls to prioritize interventions to prevent the initiation of injection drug use,9 there are few evidence-based strategies to prevent injection initiation among street-involved youth.

This situation relates, in part, to the fact that little is known about the risk factors for injection initiation within this population. Prospective research from Montréal, Quebec, has alluded to the role that crack and powder cocaine may play in promoting subsequent injection initiation,10 as has retrospective research conducted among drug users in Baltimore, Maryland.11 Much less is known about the possible role that synthetic drugs, such as methamphetamine, may play in contributing to an increased risk of injection initiation.12 Globally, amphetamine-type stimulants have emerged as one of the most commonly used groups of illicit drugs, second only to cannabis.13 This increase in amphetamine use is reflected in the epidemiology of drug use in some Canadian settings. For instance, in Vancouver, British Columbia, rates of injection use of crystal methamphetamine have increased significantly among adult injection drug users.14 This pattern is of substantial public health concern, given that crystal methamphetamine has been associated with a range of health and social harms, including the potential to drive high-risk drug-use patterns, including injection.15 Given these concerns and the well-established health-related harms of injection drug use, we investigated the possible role of crystal methamphetamine use in the incidence of first injection drug use within a cohort of street-involved youth in a Canadian setting.

16.

Background:

Adequate control of blood pressure reduces the risk of recurrent stroke. We conducted a randomized controlled study to determine whether home blood pressure monitoring with nurse-led telephone support would reduce blood pressure in patients with hypertension and a history of stroke.

Methods:

We recruited 381 participants (mean age 72 years) from outpatient and inpatient stroke clinics between Mar. 1, 2007, and Aug. 31, 2009. Nearly half (45%, 170) of the participants had some disability due to stroke. Participants were visited at home for a baseline assessment and randomly allocated to home blood pressure monitoring (n = 187) or usual care (n = 194). Those in the intervention group were given a monitor, brief training and telephone support. Participants who had home blood pressure readings consistently over target (target < 130/80 mm Hg) were advised to consult their family physician. The main outcome measure was a fall in systolic blood pressure after 12 months, measured by an independent researcher unaware of group allocation.

Results:

Despite more patients in the intervention group than in the control group having changes to antihypertensive treatment during the trial period (60.1% [98/163] v. 47.6% [78/164], p = 0.02), the fall in systolic blood pressure from baseline did not differ significantly between the groups (adjusted mean difference 0.3 mm Hg, 95% confidence interval –3.6 to 4.2 mm Hg). Subgroup analysis showed significant interaction with disability due to stroke (p = 0.03 at 6 months) and baseline blood pressure (p = 0.03 at 12 months).

Interpretation:

Overall, home monitoring did not improve blood pressure control in patients with hypertension and a history of stroke. It was associated with a fall in systolic pressure in patients who had uncontrolled blood pressure at baseline and those without disability due to stroke. Trial registration: ClinicalTrials.gov registration NCT00514800.

Worldwide about 15 million people have a stroke each year.1 Adequate control of blood pressure reduces the risk of recurrent stroke by up to 40%.2 However, about 1 in 3 adults have blood pressure readings above recommended targets.3 Systematic reviews suggest that home monitoring is associated with reduced blood pressure4–7 and may improve compliance with treatment and encourage lifestyle changes.8 But the benefits tend to be modest.4,5 Co-interventions, such as patient education and support from health professionals, are important and may lead to intensification of antihypertensive treatment.8,9

Despite the importance of good control of blood pressure in patients with a history of stroke, little is known about home monitoring in this group. Physical and cognitive impairments resulting from stroke may adversely affect patients’ compliance and ability to use the equipment successfully. We conducted a randomized controlled trial to determine whether home blood pressure monitoring with nurse-led telephone support was associated with reduced systolic blood pressure after 12 months in patients with hypertension and a history of stroke. Because PROGRESS (the Perindopril Protection Against Recurrent Stroke Study)2 showed that antihypertensive medications reduced the risk of recurrent stroke among patients with or without hypertension who had a history of stroke, we aimed to assess home monitoring in unselected patients with hypertension and a history of stroke across the blood pressure range.

17.

Background:

Polymyalgia rheumatica is one of the most common inflammatory rheumatologic conditions in older adults. Other inflammatory rheumatologic disorders are associated with an excess risk of vascular disease. We investigated whether polymyalgia rheumatica is associated with an increased risk of vascular events.

Methods:

We used the General Practice Research Database to identify patients with a diagnosis of incident polymyalgia rheumatica between Jan. 1, 1987, and Dec. 31, 1999. Patients were matched by age, sex and practice with up to 5 patients without polymyalgia rheumatica. Patients were followed until their first vascular event (cardiovascular, cerebrovascular, peripheral vascular) or the end of available records (May 2011). All participants were free of vascular disease before the diagnosis of polymyalgia rheumatica (or matched date). We used Cox regression models to compare time to first vascular event in patients with and without polymyalgia rheumatica.

Results:

A total of 3249 patients with polymyalgia rheumatica and 12 735 patients without were included in the final sample. Over a median follow-up period of 7.8 (interquartile range 3.3–12.4) years, the rate of vascular events was higher among patients with polymyalgia rheumatica than among those without (36.1 v. 12.2 per 1000 person-years; adjusted hazard ratio 2.6, 95% confidence interval 2.4–2.9). The increased risk of a vascular event was similar for each vascular disease end point. The magnitude of risk was higher in early disease and in patients younger than 60 years at diagnosis.
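
Note that the crude rate ratio implied by the two incidence rates is a little higher than the published hazard ratio, which is adjusted for covariates in a Cox model; the sketch below shows only the crude calculation.

    # Sketch: crude rate ratio from the reported incidence rates.
    rate_pmr, rate_control = 36.1, 12.2  # events per 1000 person-years
    print(f"crude rate ratio = {rate_pmr / rate_control:.2f}")  # ~2.96 v. adjusted HR 2.6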

Interpretation:

Patients with polymyalgia rheumatica have an increased risk of vascular events. This risk is greatest in the youngest age groups. As with other forms of inflammatory arthritis, patients with polymyalgia rheumatica should have their vascular risk factors identified and actively managed to reduce this excess risk.

Inflammatory rheumatologic disorders such as rheumatoid arthritis,1,2 systemic lupus erythematosus,2,3 gout,4 psoriatic arthritis2,5 and ankylosing spondylitis2,6 are associated with an increased risk of vascular disease, especially cardiovascular disease, leading to substantial morbidity and premature death.2–6 Recognition of this excess vascular risk has led to management guidelines advocating screening for and management of vascular risk factors.7–9

Polymyalgia rheumatica is one of the most common inflammatory rheumatologic conditions in older adults,10 with a lifetime risk of 2.4% for women and 1.7% for men.11 To date, evidence regarding the risk of vascular disease in patients with polymyalgia rheumatica is unclear. There are a number of biologically plausible mechanisms linking polymyalgia rheumatica and vascular disease. These include the inflammatory burden of the disease,12,13 the association of the disease with giant cell arteritis (causing an inflammatory vasculopathy, which may lead to subclinical arteritis, stenosis or aneurysms),14 and the adverse effects of long-term corticosteroid treatment (e.g., diabetes, hypertension and dyslipidemia).15,16 Paradoxically, however, use of corticosteroids in patients with polymyalgia rheumatica may actually decrease vascular risk by controlling inflammation.17 A recent systematic review concluded that although some evidence exists to support an association between vascular disease and polymyalgia rheumatica,18 the existing literature presents conflicting results, with some studies reporting an excess risk of vascular disease19,20 and vascular death,21,22 and others reporting no association.23–26 Most current studies are limited by poor methodologic quality and small samples, and are based on secondary care cohorts, who may have more severe disease, yet most patients with polymyalgia rheumatica receive treatment exclusively in primary care.27

The General Practice Research Database (GPRD), based in the United Kingdom, is a large electronic system for primary care records. It has been used as a data source for previous studies,28 including studies on the association of inflammatory conditions with vascular disease29 and on the epidemiology of polymyalgia rheumatica in the UK.30 The aim of the current study was to examine the association between polymyalgia rheumatica and vascular disease in a primary care population.

18.

Background:

Many people with depression experience repeated episodes. Previous research into the predictors of chronic depression has focused primarily on the clinical features of the disease; however, little is known about the broader spectrum of sociodemographic and health factors involved in its development. Our aim was to identify factors associated with a long-term negative prognosis of depression.

Methods:

We included 585 people aged 16 years and older who participated in the 2000/01 cycle of the National Population Health Survey and who reported experiencing a major depressive episode in 2000/01. The primary outcome was the course of depression until 2006/07. We grouped individuals into trajectories of depression using growth trajectory models. We included demographic, mental and physical health factors as predictors in the multivariable regression model to compare people with different trajectories.
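
Growth trajectory models are usually fitted with specialized routines; purely as a stand-in, the sketch below groups hypothetical repeated depression indicators into two trajectories with a Gaussian mixture. The array and its values are invented, not survey data.

```python
# Stand-in for growth trajectory modelling: cluster repeated binary
# depression indicators (rows = participants, columns = follow-up cycles).
import numpy as np
from sklearn.mixture import GaussianMixture

follow_up = np.array([
    [1, 0, 0, 0],   # depression resolves and does not recur
    [1, 1, 0, 1],   # repeated episodes
    [1, 0, 1, 1],
    [1, 0, 0, 0],
    [1, 1, 1, 0],
    [1, 0, 0, 1],
])

gmm = GaussianMixture(n_components=2, covariance_type="diag", random_state=0)
trajectory = gmm.fit_predict(follow_up)
print(trajectory)   # a 0/1 trajectory label per participant
```

With the trajectory labels in hand, the predictors listed above would then enter a multivariable (e.g., logistic) regression with the label as outcome.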

Results:

Participants fell into two main depression trajectories: those whose depression resolved and did not recur (44.7%) and those who experienced repeated episodes (55.3%). In the multivariable model, daily smoking (OR 2.68, 95% CI 1.54–4.67), low mastery (i.e., feeling that life circumstances are beyond one’s control) (OR 1.10, 95% CI 1.03–1.18) and history of depression (OR 3.5, 95% CI 1.95–6.27) were significant predictors (p < 0.05) of repeated episodes of depression.
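
The odds ratios and confidence intervals above follow the usual transformation of logistic regression output: exponentiate the coefficient and its Wald limits. The coefficient and standard error below are hypothetical, chosen only to land near the daily-smoking estimate.

```python
# Hypothetical log-odds coefficient and standard error for illustration.
import math

beta, se = 0.9865, 0.283
odds_ratio = math.exp(beta)
ci_low = math.exp(beta - 1.96 * se)    # lower Wald limit on the OR scale
ci_high = math.exp(beta + 1.96 * se)   # upper Wald limit on the OR scale
print(f"OR {odds_ratio:.2f}, 95% CI {ci_low:.2f}-{ci_high:.2f}")
# -> OR 2.68, 95% CI 1.54-4.67
```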

Interpretation:

People with major depression who were current smokers or had low levels of mastery were at an increased risk of repeated episodes of depression. Future studies are needed to confirm the predictive value of these variables and to evaluate their accuracy for diagnosis and as a guide to treatment.

Depression is a common and often recurrent disorder that compromises daily functioning and is associated with a decrease in quality of life.1–3 Guidelines for the treatment of depression, such as those published by the Canadian Network for Mood and Anxiety Treatments (CANMAT)5 and the National Institute for Health and Clinical Excellence (NICE) in the United Kingdom,4 often recommend antidepressant treatment in patients with severe symptoms and outline specific risk factors supporting long-term treatment maintenance.4,5 However, for patients who do not meet the criteria for treatment of depression, the damaging sequelae of depression are frequently compounded without treatment.5 In such cases, early treatment for depression may result in an improved long-term prognosis.6–8

A small but growing number of studies have begun to characterize the long-term course of depression in terms of severity,9 lifetime prevalence10 and patterns of recurrence.11 However, a recent systematic review of the risk factors for chronic depression highlighted a need for longitudinal studies to better identify prognostic factors.12 The capacity to distinguish long-term patterns of recurrence of depression in relation to the wide range of established clinical and nonclinical factors for depression could be highly beneficial. Our objective was to use a population-based cohort to identify and understand the baseline factors associated with a long-term negative prognosis of depression.

19.

Background

Screening for increased waist circumference and hypertriglyceridemia (the hypertriglyceridemic-waist phenotype) has been proposed as an inexpensive approach to identify patients with excess intra-abdominal adiposity and associated metabolic abnormalities. We examined the relationship between the hypertriglyceridemic-waist phenotype and the risk of coronary artery disease in apparently healthy individuals.

Methods

A total of 21 787 participants aged 45–79 years were followed for a mean of 9.8 (standard deviation 1.7) years. Coronary artery disease developed in 2109 of them during follow-up. The hypertriglyceridemic-waist phenotype was defined as a waist circumference of 90 cm or more and a triglyceride level of 2.0 mmol/L or more in men, and a waist circumference of 85 cm or more and a triglyceride level of 1.5 mmol/L or more in women.
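
The sex-specific definition above translates directly into a small predicate. The function name is ours, but the thresholds are taken from the abstract.

```python
def has_htg_waist(sex: str, waist_cm: float, triglycerides_mmol_l: float) -> bool:
    """True if the participant meets the hypertriglyceridemic-waist definition."""
    if sex == "male":
        return waist_cm >= 90 and triglycerides_mmol_l >= 2.0
    if sex == "female":
        return waist_cm >= 85 and triglycerides_mmol_l >= 1.5
    raise ValueError("sex must be 'male' or 'female'")

print(has_htg_waist("male", 94, 2.3))    # True: both thresholds met
print(has_htg_waist("female", 82, 1.8))  # False: waist below 85 cm
```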

Results

Compared with participants who had a waist circumference and triglyceride level below the threshold, those with the hypertriglyceridemic-waist phenotype had higher blood pressure indices, higher levels of apolipoprotein B and C-reactive protein, lower levels of high-density lipoprotein cholesterol and apolipoprotein A-I, and smaller low-density lipoprotein particles. Among men, those with the hypertriglyceridemic-waist phenotype had an unadjusted hazard ratio for future coronary artery disease of 2.40 (95% confidence interval [CI] 2.02–2.87) compared with men who did not have the phenotype. Women with the phenotype had an unadjusted hazard ratio of 3.84 (95% CI 3.20–4.62) compared with women who did not have the phenotype.

Interpretation

Among participants from a European cohort representative of a contemporary Western population, the hypertriglyceridemic-waist phenotype was associated with a deteriorated cardiometabolic risk profile and an increased risk of coronary artery disease.

Although obesity is a health hazard, not every obese person has the expected metabolic abnormalities associated with excess body fat.1,2 Epidemiologic and metabolic studies have shown that the metabolic complications of overweight and obesity are related more to the localization than to the amount of total body fat.3,4 Imaging studies using techniques such as computed tomography or magnetic resonance imaging have shown that, among equally obese individuals, those with an excess of intra-abdominal or visceral adipose tissue have metabolic abnormalities and are at increased risk of coronary artery disease and type 2 diabetes.5–7

The systematic measurement of waist circumference has been proposed as a crude anthropometric correlate of intra-abdominal adiposity.8 However, because waist circumference cannot fully discriminate intra-abdominal from subcutaneous abdominal adiposity, we previously suggested that the presence of elevated triglyceride levels could be used as a marker of “dysfunctional” adipose tissue, intra-abdominal obesity and associated metabolic abnormalities in people with an increased waistline.9–11 What we had initially described as the hypertriglyceridemic-waist phenotype (the combination of an increased waist circumference and hypertriglyceridemia) could be a useful and inexpensive screening tool to identify people at increased risk of coronary artery disease and type 2 diabetes.12–14 In this article, we report on the performance of the hypertriglyceridemic-waist phenotype as a screening tool among participants enrolled in the European Prospective Investigation into Cancer and Nutrition (EPIC)-Norfolk study.

20.

Background

Chest pain can be caused by various conditions, with life-threatening cardiac disease being of greatest concern. Prediction scores to rule out coronary artery disease have been developed for use in emergency settings. We developed and validated a simple prediction rule for use in primary care.

Methods

We conducted a cross-sectional diagnostic study in 74 primary care practices in Germany. Primary care physicians recruited all consecutive patients who presented with chest pain (n = 1249) and recorded symptoms and findings for each patient (derivation cohort). An independent expert panel reviewed follow-up data obtained at six weeks and six months on symptoms, investigations, hospital admissions and medications to determine the presence or absence of coronary artery disease. Adjusted odds ratios of relevant variables were used to develop a prediction rule. We calculated measures of diagnostic accuracy for different cut-off values for the prediction scores using data derived from another prospective primary care study (validation cohort).
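
Diagnostic accuracy at each cut-off reduces to sensitivity and specificity from a two-by-two table. The helper below shows the calculation; the four counts are invented for illustration, chosen only so the output lands near the figures in the results.

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity and specificity from two-by-two table counts."""
    return tp / (tp + fn), tn / (tn + fp)

# Invented counts for illustration, not the study's data.
sens, spec = sens_spec(tp=81, fn=12, tn=842, fp=200)
print(f"sensitivity {sens:.1%}, specificity {spec:.1%}")
# -> sensitivity 87.1%, specificity 80.8%
```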

Results

The prediction rule contained five determinants (age/sex, known vascular disease, patient assumes pain is of cardiac origin, pain is worse during exercise, and pain is not reproducible by palpation), with the score ranging from 0 to 5 points. The area under the receiver operating characteristic curve was 0.87 (95% confidence interval [CI] 0.83–0.91) for the derivation cohort and 0.90 (95% CI 0.87–0.93) for the validation cohort. The best overall discrimination was with a cut-off value of 3 (positive result 3–5 points; negative result ≤ 2 points), which had a sensitivity of 87.1% (95% CI 79.9%–94.2%) and a specificity of 80.8% (95% CI 77.6%–83.9%).
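
Read literally, the rule is a five-item checklist scored 0–5, with 3 or more points read as positive. The sketch below encodes it; the function name is ours, and because the abstract does not spell out the age/sex criterion, that item is passed in as a precomputed flag.

```python
def cad_score(age_sex_risk: bool,
              known_vascular_disease: bool,
              patient_assumes_cardiac_origin: bool,
              pain_worse_on_exercise: bool,
              pain_not_reproducible_by_palpation: bool) -> int:
    """One point per determinant present, 0-5 in total."""
    return sum([age_sex_risk, known_vascular_disease,
                patient_assumes_cardiac_origin, pain_worse_on_exercise,
                pain_not_reproducible_by_palpation])

score = cad_score(True, False, True, True, True)
print(score, "positive (3-5)" if score >= 3 else "negative (<= 2)")   # 4 positive
```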

Interpretation

The prediction rule for coronary artery disease in primary care proved to be robust in the validation cohort. It can help to rule out coronary artery disease in patients presenting with chest pain in primary care.

Chest pain is common. Studies have shown a lifetime prevalence of 20% to 40% in the general population.1 Its prevalence in primary care ranges from 0.7% to 2.7% depending on inclusion criteria and country,2–4 with coronary artery disease being the underlying cause in about 12% of primary care patients.1,5 General practitioners are challenged to identify serious cardiac disease reliably while also protecting patients from unnecessary investigations and hospital admissions. Because electrocardiography and the cardiac troponin test are of limited value in primary care,6,7 history taking and physical examination remain the main diagnostic tools.

Most published studies on the diagnostic accuracy of signs and symptoms for acute coronary events have been conducted in high-prevalence settings such as hospital emergency departments.8–10 Predictive scores have also been developed for use in emergency departments, mainly for the diagnosis of acute coronary syndromes.11–13 To what degree these apply in primary care is unknown.14–16

A clinical prediction score to rule out coronary artery disease in general practice has been developed.17 However, it did not perform well when validated externally. The aim of our study was to develop a simple, valid and usable prediction score based on signs and symptoms to help primary care physicians rule out coronary artery disease in patients presenting with chest pain.
