Similar articles (20 found)
1.

Background

The pathogenesis of appendicitis is unclear. We evaluated whether exposure to air pollution was associated with an increased incidence of appendicitis.

Methods

We identified 5191 adults who had been admitted to hospital with appendicitis between Apr. 1, 1999, and Dec. 31, 2006. The air pollutants studied were ozone, nitrogen dioxide, sulfur dioxide, carbon monoxide, and suspended particulate matter of less than 10 μm and less than 2.5 μm in diameter. We estimated the odds of appendicitis relative to short-term increases in concentrations of selected pollutants, alone and in combination, after controlling for temperature and relative humidity as well as the effects of age, sex and season.

Results

An increase in the interquartile range of the 5-day average of ozone was associated with appendicitis (odds ratio [OR] 1.14, 95% confidence interval [CI] 1.03–1.25). In summer (July–August), the effects were most pronounced for ozone (OR 1.32, 95% CI 1.10–1.57), sulfur dioxide (OR 1.30, 95% CI 1.03–1.63), nitrogen dioxide (OR 1.76, 95% CI 1.20–2.58), carbon monoxide (OR 1.35, 95% CI 1.01–1.80) and particulate matter less than 10 μm in diameter (OR 1.20, 95% CI 1.05–1.38). We observed a significant effect of the air pollutants in the summer months among men but not among women (e.g., OR for increase in the 5-day average of nitrogen dioxide 2.05, 95% CI 1.21–3.47, among men and 1.48, 95% CI 0.85–2.59, among women). The double-pollutant model of exposure to ozone and nitrogen dioxide in the summer months was associated with attenuation of the effects of ozone (OR 1.22, 95% CI 1.01–1.48) and nitrogen dioxide (OR 1.48, 95% CI 0.97–2.24).

Interpretation

Our findings suggest that some cases of appendicitis may be triggered by short-term exposure to air pollution. If these findings are confirmed, measures to improve air quality may help to decrease rates of appendicitis.

Appendicitis was introduced into the medical vernacular in 1886.1 Since then, the prevailing theory of its pathogenesis has implicated obstruction of the appendiceal orifice by a fecalith or lymphoid hyperplasia.2 However, this notion does not completely account for variations in incidence observed by age,3,4 sex,3,4 ethnic background,3,4 family history,5 temporal–spatial clustering6 and seasonality,3,4 nor does it completely explain the trends in incidence of appendicitis in developed and developing nations.3,7,8

The incidence of appendicitis increased dramatically in industrialized nations in the 19th century and the early part of the 20th century.1 Without explanation, it decreased in the middle and latter parts of the 20th century.3 The decrease coincided with legislation to improve air quality. For example, after the United States Clean Air Act was passed in 1970,9 the incidence of appendicitis decreased by 14.6% from 1970 to 1984.3 Likewise, a 36% drop in incidence was reported in the United Kingdom between 1975 and 1994,10 after legislation was passed in 1956 and 1968 to improve air quality and in the 1970s to control industrial sources of air pollution.
Furthermore, appendicitis is less common in developing nations; however, as these countries become more industrialized, the incidence of appendicitis has been increasing.7

Air pollution is known to be a risk factor for multiple conditions, to exacerbate disease states and to increase all-cause mortality.11 It has a direct effect on pulmonary diseases such as asthma11 and on nonpulmonary diseases including myocardial infarction, stroke and cancer.11–13 Inflammation induced by exposure to air pollution contributes to some adverse health effects.14–17 Similar to the effects of air pollution, a proinflammatory response has been associated with appendicitis.18–20

We conducted a case–crossover study involving a population-based cohort of patients admitted to hospital with appendicitis to determine whether short-term increases in concentrations of selected air pollutants were associated with hospital admission because of appendicitis.
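A note on the arithmetic behind odds ratios such as those reported above: an OR and its 95% confidence interval can be computed from a 2×2 table on the log scale. The counts below are made up purely for illustration (the study itself used conditional models adjusted for temperature, humidity, age, sex and season), and the simple Woolf standard error shown here is only one common choice.

```python
import math

def odds_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """OR and 95% CI for a 2x2 table:
         a = exposed cases,    b = unexposed cases,
         c = exposed controls, d = unexposed controls.
    Computed on the log scale with the Woolf standard error."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts, chosen only to illustrate the calculation:
or_, lo, hi = odds_ratio_ci(120, 80, 90, 110)
print(f"OR {or_:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
```

Because the interval is symmetric on the log scale, it is asymmetric around the OR itself, which is why published intervals such as 1.14 (1.03–1.25) sit slightly off-centre.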

2.

Background

Cryotherapy is widely used for the treatment of cutaneous warts in primary care. However, evidence favours salicylic acid application. We compared the effectiveness of these treatments as well as a wait-and-see approach.

Methods

Consecutive patients with new cutaneous warts were recruited in 30 primary care practices in the Netherlands between May 1, 2006, and Jan. 26, 2007. We randomly allocated eligible patients to one of three groups: cryotherapy with liquid nitrogen every two weeks, self-application of salicylic acid daily or a wait-and-see approach. The primary outcome was the proportion of participants whose warts were all cured at 13 weeks. Analysis was on an intention-to-treat basis. Secondary outcomes included treatment adherence, side effects and treatment satisfaction. Research nurses assessed outcomes during home visits at 4, 13 and 26 weeks.

Results

Of the 250 participants (age 4 to 79 years), 240 were included in the analysis at 13 weeks (loss to follow-up 4%). Cure rates were 39% (95% confidence interval [CI] 29%–51%) in the cryotherapy group, 24% (95% CI 16%–35%) in the salicylic acid group and 16% (95% CI 9.5%–25%) in the wait-and-see group. Differences in effectiveness were most pronounced among participants with common warts (n = 116): cure rates were 49% (95% CI 34%–64%) in the cryotherapy group, 15% (95% CI 7%–30%) in the salicylic acid group and 8% (95% CI 3%–21%) in the wait-and-see group. Cure rates among the participants with plantar warts (n = 124) did not differ significantly between treatment groups.

Interpretation

For common warts, cryotherapy was the most effective therapy in primary care. For plantar warts, we found no clinically relevant difference in effectiveness between cryotherapy, topical application of salicylic acid and a wait-and-see approach after 13 weeks. (ClinicalTrials.gov registration no. ISRCTN42730629)

Cutaneous warts are common.1–3 Up to one-third of primary school children have warts, of which two-thirds resolve within two years.4,5 Because warts frequently result in discomfort,6 2% of the general population and 6% of school-aged children present with warts to their family physician each year.7,8 The usual treatment is cryotherapy with liquid nitrogen or, less frequently, topical application of salicylic acid.9–12 Some physicians choose a wait-and-see approach because of the benign natural course of warts and the risk of side effects of treatment.10,11

A recent Cochrane review of treatments for cutaneous warts concluded that the available studies were small, poorly designed or limited to dermatology outpatients.10,11 Evidence on cryotherapy was contradictory,13–18 whereas the evidence on salicylic acid was more convincing.19–23 However, studies that compared cryotherapy and salicylic acid directly showed no differences in effectiveness.24,25 The Cochrane review called for high-quality trials in primary care to compare the effects of cryotherapy, salicylic acid and placebo.

We conducted a three-arm randomized controlled trial to compare the effectiveness of cryotherapy with liquid nitrogen, topical application of salicylic acid and a wait-and-see approach for the treatment of common and plantar warts in primary care.
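The cure rates above are reported with 95% confidence intervals. For a single proportion, a normal-approximation (Wald) interval can be sketched as below; the arm size used is hypothetical, and trial reports often use exact or Wilson intervals instead, which behave better for small groups or extreme proportions.

```python
import math

def wald_ci(successes: int, n: int, z: float = 1.96):
    """Normal-approximation (Wald) 95% CI for a proportion.
    Published trials often prefer exact or Wilson intervals,
    especially when n is small or p is near 0 or 1."""
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - z * se), min(1.0, p + z * se)

# Illustrative only: a hypothetical arm of 80 participants, 31 cured (~39%)
p, lo, hi = wald_ci(31, 80)
print(f"{p:.0%} cured (95% CI {lo:.0%}-{hi:.0%})")
```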

3.

Background:

Polymyalgia rheumatica is one of the most common inflammatory rheumatologic conditions in older adults. Other inflammatory rheumatologic disorders are associated with an excess risk of vascular disease. We investigated whether polymyalgia rheumatica is associated with an increased risk of vascular events.

Methods:

We used the General Practice Research Database to identify patients with a diagnosis of incident polymyalgia rheumatica between Jan. 1, 1987, and Dec. 31, 1999. Patients were matched by age, sex and practice with up to 5 patients without polymyalgia rheumatica. Patients were followed until their first vascular event (cardiovascular, cerebrovascular, peripheral vascular) or the end of available records (May 2011). All participants were free of vascular disease before the diagnosis of polymyalgia rheumatica (or matched date). We used Cox regression models to compare time to first vascular event in patients with and without polymyalgia rheumatica.

Results:

A total of 3249 patients with polymyalgia rheumatica and 12 735 patients without were included in the final sample. Over a median follow-up period of 7.8 (interquartile range 3.3–12.4) years, the rate of vascular events was higher among patients with polymyalgia rheumatica than among those without (36.1 v. 12.2 per 1000 person-years; adjusted hazard ratio 2.6, 95% confidence interval 2.4–2.9). The increased risk of a vascular event was similar for each vascular disease end point. The magnitude of risk was higher in early disease and in patients younger than 60 years at diagnosis.

Interpretation:

Patients with polymyalgia rheumatica have an increased risk of vascular events. This risk is greatest in the youngest age groups. As with other forms of inflammatory arthritis, patients with polymyalgia rheumatica should have their vascular risk factors identified and actively managed to reduce this excess risk.

Inflammatory rheumatologic disorders such as rheumatoid arthritis,1,2 systemic lupus erythematosus,2,3 gout,4 psoriatic arthritis2,5 and ankylosing spondylitis2,6 are associated with an increased risk of vascular disease, especially cardiovascular disease, leading to substantial morbidity and premature death.2–6 Recognition of this excess vascular risk has led to management guidelines advocating screening for and management of vascular risk factors.7–9

Polymyalgia rheumatica is one of the most common inflammatory rheumatologic conditions in older adults,10 with a lifetime risk of 2.4% for women and 1.7% for men.11 To date, the evidence regarding the risk of vascular disease in patients with polymyalgia rheumatica is unclear. There are a number of biologically plausible mechanisms linking polymyalgia rheumatica and vascular disease. These include the inflammatory burden of the disease,12,13 the association of the disease with giant cell arteritis (causing an inflammatory vasculopathy, which may lead to subclinical arteritis, stenosis or aneurysms)14 and the adverse effects of long-term corticosteroid treatment (e.g., diabetes, hypertension and dyslipidemia).15,16 Paradoxically, however, use of corticosteroids in patients with polymyalgia rheumatica may actually decrease vascular risk by controlling inflammation.17 A recent systematic review concluded that although some evidence exists to support an association between vascular disease and polymyalgia rheumatica,18 the existing literature presents conflicting results, with some studies reporting an excess risk of vascular disease19,20 and vascular death,21,22 and others reporting no association.23–26 Most existing studies are limited by poor methodologic quality and small samples, and are based on secondary care cohorts, whose patients may have more severe disease, yet most patients with polymyalgia rheumatica receive treatment exclusively in primary care.27

The General Practice Research Database (GPRD), based in the United Kingdom, is a large electronic system for primary care records. It has been used as a data source for previous studies,28 including studies on the association of inflammatory conditions with vascular disease29 and on the epidemiology of polymyalgia rheumatica in the UK.30 The aim of the current study was to examine the association between polymyalgia rheumatica and vascular disease in a primary care population.
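The crude event rates reported in this study (36.1 v. 12.2 per 1000 person-years) can be reproduced from counts of events and accumulated person-time, as sketched below. The counts used are hypothetical, chosen only to match the published rates; note also that the crude rate ratio they imply differs from the adjusted hazard ratio of 2.6, which comes from a covariate-adjusted Cox model.

```python
def rate_per_1000_py(events: int, person_years: float) -> float:
    """Crude incidence rate per 1000 person-years of follow-up."""
    return 1000.0 * events / person_years

# Hypothetical counts chosen only to reproduce the published crude rates:
exposed = rate_per_1000_py(361, 10_000)     # patients with polymyalgia rheumatica
unexposed = rate_per_1000_py(122, 10_000)   # matched patients without
crude_ratio = exposed / unexposed           # crude ratio, before any adjustment
print(exposed, unexposed, round(crude_ratio, 2))
```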

4.

Background:

Little evidence exists on the effect of an energy-unrestricted healthy diet on metabolic syndrome. We evaluated the long-term effect of Mediterranean diets ad libitum on the incidence or reversion of metabolic syndrome.

Methods:

We performed a secondary analysis of the PREDIMED trial — a multicentre, randomized trial done between October 2003 and December 2010 that involved men and women (age 55–80 yr) at high risk for cardiovascular disease. Participants were randomly assigned to 1 of 3 dietary interventions: a Mediterranean diet supplemented with extra-virgin olive oil, a Mediterranean diet supplemented with nuts or advice on following a low-fat diet (the control group). The interventions did not include increased physical activity or weight loss as a goal. We analyzed available data from 5801 participants. We determined the effect of diet on incidence and reversion of metabolic syndrome using Cox regression analysis to calculate hazard ratios (HRs) and 95% confidence intervals (CIs).

Results:

Over 4.8 years of follow-up, metabolic syndrome developed in 960 (50.0%) of the 1919 participants who did not have the condition at baseline. The risk of developing metabolic syndrome did not differ between participants assigned to the control diet and those assigned to either of the Mediterranean diets (control v. olive oil HR 1.10, 95% CI 0.94–1.30, p = 0.231; control v. nuts HR 1.08, 95% CI 0.92–1.27, p = 0.3). Reversion occurred in 958 (28.2%) of the 3392 participants who had metabolic syndrome at baseline. Compared with the control group, participants on either Mediterranean diet were more likely to undergo reversion (control v. olive oil HR 1.35, 95% CI 1.15–1.58, p < 0.001; control v. nuts HR 1.28, 95% CI 1.08–1.51, p < 0.001). Participants in the group receiving olive oil supplementation showed significant decreases in both central obesity and high fasting glucose (p = 0.02); participants in the group supplemented with nuts showed a significant decrease in central obesity.

Interpretation:

A Mediterranean diet supplemented with either extra-virgin olive oil or nuts is not associated with the onset of metabolic syndrome, but such diets are more likely to cause reversion of the condition. An energy-unrestricted Mediterranean diet may be useful in reducing the risks of central obesity and hyperglycemia in people at high risk of cardiovascular disease. Trial registration: ClinicalTrials.gov, no. ISRCTN35739639.

Metabolic syndrome is a cluster of 3 or more related cardiometabolic risk factors: central obesity (determined by waist circumference), hypertension, hypertriglyceridemia, low plasma high-density lipoprotein (HDL) cholesterol levels and hyperglycemia. Having the syndrome increases a person’s risk of type 2 diabetes and cardiovascular disease.1,2 In addition, the condition is associated with increased morbidity and all-cause mortality.1,3–5 The worldwide prevalence of metabolic syndrome in adults approaches 25%6–8 and increases with age,7 especially among women,8,9 making it an important public health issue.

Several studies have shown that lifestyle modifications,10 such as increased physical activity,11 adherence to a healthy diet12,13 or weight loss,14–16 are associated with reversion of metabolic syndrome and its components. However, little information exists as to whether changes in the overall dietary pattern without weight loss might also be effective in preventing and managing the condition.

The Mediterranean diet is recognized as one of the healthiest dietary patterns. It has shown benefits in patients with cardiovascular disease17,18 and in the prevention and treatment of related conditions, such as diabetes,19–21 hypertension22,23 and metabolic syndrome.24 Several cross-sectional25–29 and prospective30–32 epidemiologic studies have suggested an inverse association between adherence to the Mediterranean diet and the prevalence or incidence of metabolic syndrome. Evidence from clinical trials has shown that an energy-restricted Mediterranean diet33 or adoption of a Mediterranean diet after weight loss34 has a beneficial effect on metabolic syndrome. However, these studies did not determine whether the effect could be attributed to the weight loss or to the diets themselves.

Seminal data from the PREDIMED (PREvención con DIeta MEDiterránea) study suggested that adherence to a Mediterranean diet supplemented with nuts reversed metabolic syndrome more than advice to follow a low-fat diet did.35 However, that report was based on data from only 1224 participants followed for 1 year. We have analyzed data from the final PREDIMED cohort after a median follow-up of 4.8 years to determine the long-term effects of a Mediterranean diet on metabolic syndrome.

5.
We report evidence that adenylate kinase (AK) from Escherichia coli can be activated by the direct binding of a magnesium ion to the enzyme, in addition to ATP-complexed Mg2+. By systematically varying the concentrations of AMP, ATP and magnesium in kinetic experiments, we found that the apparent substrate inhibition of AK, formerly attributed to AMP, was suppressed at low magnesium concentrations and enhanced at high magnesium concentrations. This previously unreported magnesium dependence can be accounted for by a modified random bi-bi model in which Mg2+ can bind to AK directly prior to AMP binding. A new kinetic model is proposed to replace the conventional random bi-bi mechanism with substrate inhibition and is able to describe the kinetic data over a physiologically relevant range of magnesium concentrations. According to this model, the magnesium-activated AK exhibits a 23 ± 3-fold increase in its forward reaction rate compared with the unactivated form. The findings imply that Mg2+ could be an important effector in the energy signaling network in cells.

Adenylate kinase (AK)2 is a ∼24-kDa enzyme involved in cellular metabolism that catalyzes the reversible phosphoryl transfer reaction (1):

Mg2+·ATP + AMP ⇌ Mg2+·ADP + ADP (Reaction 1)

It is recognized to play an important role in cellular energetic signaling networks (2, 3). A deficiency in human AK function may lead to illnesses such as hemolytic anemia (4–8) and coronary artery disease (9); the latter is thought to be caused by a disruption of the AMP signaling network of AK (10). The ubiquity of AK makes it an ideal candidate for investigating evolutionary divergence and natural adaptation at a molecular level (11, 12). Indeed, extensive structure–function studies have been carried out for AK (reviewed in Ref. 13). Both structural and biophysical studies have suggested that large-amplitude conformational changes in AK are important for catalysis (14–19). More recently, the functional roles of conformational dynamics have been investigated using NMR (20–22), computer simulations (23–27) and single-molecule spectroscopy (28). Given the critical role of AK in regulating cellular energy networks and its use as a model system for understanding the functional roles of conformational changes in enzymes, it is imperative that the enzymatic mechanism of AK be thoroughly characterized and understood.

The enzymatic reaction of adenylate kinase has been shown by isotope exchange experiments to follow a random bi-bi mechanism (29). Isoforms of adenylate kinase characterized from a wide range of species show a high degree of sequence, structure and functional conservation. Although all AKs appear to follow the same random bi-bi mechanistic framework (15, 29–33), detailed kinetic analysis reveals interesting variations among isoforms. For example, one of the most puzzling discrepancies is the difference in turnover rates with increasing AMP concentration between rabbit muscle AK and Escherichia coli AK. Although the reactivity of rabbit muscle AK is slightly inhibited at higher AMP concentrations (29, 32), E. coli AK exhibits its maximum turnover rate around 0.2 mM AMP followed by a steep drop, which plateaus at still higher AMP concentrations (33–35). This observation has traditionally been attributed to greater substrate inhibition by AMP in E. coli AK compared with the rabbit isoform; yet the question of whether the reaction involves competitive or non-competitive inhibition by AMP at the ATP binding site remains unresolved (15, 33, 35–37).

Here, we report a comprehensive kinetic study of the forward reaction of AK, exploring concentrations of nucleotides and Mg2+ comparable to those inside E. coli cells: [Mg2+] ∼ 1–2 mM (38) and [ATP] up to 3 mM (39). We discovered a previously unreported phenomenon: an increase in the forward reaction rate of AK with increasing Mg2+ concentration, where the stoichiometry of Mg2+ to the enzyme is greater than one. This observation leads us to propose an Mg2+-activation mechanism augmenting the commonly accepted random bi-bi model for E. coli AK. Our model fully explains AK’s observed kinetic behavior with AMP, ATP and Mg2+ as substrates, outperforming the previous model requiring AMP inhibition. The new Mg2+-activation model also explains the discrepancies in AMP inhibition behavior among the currently available E. coli AK kinetic data. Given the central role of AK in energy regulation and our new experimental evidence, it is possible that Mg2+ and its regulation participate in the cellular respiratory network through AK (40–42), an exciting direction for future research.
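The reported 23-fold activation can be sketched numerically with a simple hyperbolic (single-site binding) activation term. This is an illustrative sketch, not the authors' fitted model: the functional form and the half-activation constant K_MG below are assumptions; only the 23-fold saturating activation is taken from the abstract.

```python
# Illustrative sketch of Mg2+ activation of AK's forward rate.
# ALPHA = 23 is the reported fold-activation; K_MG is a hypothetical
# half-activation constant (mM) for direct Mg2+ binding, chosen arbitrarily.

ALPHA = 23.0   # fold increase in forward rate at saturating free Mg2+ (reported)
K_MG = 0.5     # hypothetical half-activation constant, mM (assumption)

def activation_factor(mg_mm: float) -> float:
    """Rate multiplier: 1 with no free Mg2+, rising to ALPHA at saturation."""
    occupancy = mg_mm / (K_MG + mg_mm)          # fraction of enzyme with Mg2+ bound
    return 1.0 + (ALPHA - 1.0) * occupancy

def forward_rate(v_unactivated: float, mg_mm: float) -> float:
    """Forward reaction rate scaled by the Mg2+-activation factor."""
    return v_unactivated * activation_factor(mg_mm)

if __name__ == "__main__":
    for mg in (0.0, 0.5, 1.0, 2.0):             # spans the ~1-2 mM physiological range
        print(f"[Mg2+] = {mg:.1f} mM -> rate x{activation_factor(mg):.1f}")
```

With K_MG well below 1 mM, the sketch predicts near-maximal activation across the physiological 1–2 mM range, which is consistent with the idea that cellular Mg2+ fluctuations could modulate AK flux.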

6.
7.
Background:

Rates of imaging for low-back pain are high and are associated with increased health care costs and radiation exposure as well as potentially poorer patient outcomes. We conducted a systematic review to investigate the effectiveness of interventions aimed at reducing the use of imaging for low-back pain.

Methods:

We searched MEDLINE, Embase, CINAHL and the Cochrane Central Register of Controlled Trials from the earliest records to June 23, 2014. We included randomized controlled trials, controlled clinical trials and interrupted time series studies that assessed interventions designed to reduce the use of imaging in any clinical setting, including primary, emergency and specialist care. Two independent reviewers extracted data and assessed risk of bias. We used raw data on imaging rates to calculate summary statistics. Study heterogeneity prevented meta-analysis.

Results:

A total of 8500 records were identified through the literature search. Of the 54 potentially eligible studies reviewed in full, 7 were included in our review. Clinical decision support involving a modified referral form in a hospital setting reduced imaging by 36.8% (95% confidence interval [CI] 33.2% to 40.5%). Targeted reminders to primary care physicians of appropriate indications for imaging reduced referrals for imaging by 22.5% (95% CI 8.4% to 36.8%). Interventions that used practitioner audits and feedback, practitioner education or guideline dissemination did not significantly reduce imaging rates. Lack of power within some of the included studies resulted in lack of statistical significance despite potentially clinically important effects.

Interpretation:

Clinical decision support in a hospital setting and targeted reminders to primary care doctors were effective interventions in reducing the use of imaging for low-back pain. These are potentially low-cost interventions that would substantially decrease medical expenditures associated with the management of low-back pain.

Current evidence-based clinical practice guidelines recommend against the routine use of imaging in patients presenting with low-back pain.1–3 Despite this, imaging rates remain high,4,5 which indicates poor concordance with these guidelines.6,7

Unnecessary imaging for low-back pain has been associated with poorer patient outcomes, increased radiation exposure and higher health care costs.8 No short- or long-term clinical benefits have been shown with routine imaging of the low back, and the diagnostic value of incidental imaging findings remains uncertain.9–12 A 2008 systematic review found that imaging accounted for 7% of direct costs associated with low-back pain, which in 1998 translated to more than US$6 billion in the United States and £114 million in the United Kingdom.13 Current costs are likely to be substantially higher, given an estimated 65% increase in spine-related expenditures between 1997 and 2005.14

Various interventions have been tried to reduce imaging rates among people with low-back pain. These include strategies targeted at the practitioner, such as guideline dissemination,15–17 education workshops,18,19 audit and feedback of imaging use,7,20,21 ongoing reminders7 and clinical decision support.22–24 It is unclear which, if any, of these strategies are effective.25 We conducted a systematic review to investigate the effectiveness of interventions designed to reduce imaging rates for the management of low-back pain.

8.

Background:

Brief interventions delivered by family physicians to address excessive alcohol use among adult patients are effective. We conducted a study to determine whether such an intervention would be similarly effective in reducing binge drinking and excessive cannabis use among young people.

Methods:

We conducted a cluster randomized controlled trial involving 33 family physicians in Switzerland. Physicians in the intervention group received training in delivering a brief intervention to young people during the consultation in addition to usual care. Physicians in the control group delivered usual care only. Consecutive patients aged 15–24 years were recruited from each practice and, before the consultation, completed a confidential questionnaire about their general health and substance use. Patients were followed up at 3, 6 and 12 months after the consultation. The primary outcome measure was self-reported excessive substance use (≥ 1 episode of binge drinking, or ≥ 1 joint of cannabis per week, or both) in the past 30 days.

Results:

Of the 33 participating physicians, 17 were randomly allocated to the intervention group and 16 to the control group. Of the 594 participating patients, 279 (47.0%) identified themselves as binge drinkers or excessive cannabis users, or both, at baseline. Excessive substance use did not differ significantly between patients whose physicians were in the intervention group and those whose physicians were in the control group at any of the follow-up points (odds ratio [OR] and 95% confidence interval [CI] at 3 mo: 0.9 [0.6–1.4]; at 6 mo: 1.0 [0.6–1.6]; and at 12 mo: 1.1 [0.7–1.8]). The differences between groups were also nonsignificant after we restricted the analysis to patients who reported excessive substance use at baseline (OR 1.6, 95% CI 0.9–2.8, at 3 mo; OR 1.7, 95% CI 0.9–3.2, at 6 mo; and OR 1.9, 95% CI 0.9–4.0, at 12 mo).

Interpretation:

Training family physicians to use a brief intervention to address excessive substance use among young people was not effective in reducing binge drinking and excessive cannabis use in this patient population. Trial registration: Australian New Zealand Clinical Trials Registry, no. ACTRN12608000432314.

Most health-compromising behaviours begin in adolescence.1 Interventions to address these behaviours early are likely to bring long-lasting benefits.2 Harmful use of alcohol is a leading factor associated with premature death and disability worldwide, with a disproportionately high impact on young people (aged 10–24 yr).3,4 Similarly, early cannabis use can have adverse consequences that extend into adulthood.5–8

In adolescence and early adulthood, binge drinking on at least a monthly basis is associated with an increased risk of adverse outcomes later in life.9–12 Although any cannabis use is potentially harmful, weekly use represents a threshold in adolescence related to an increased risk of cannabis (and tobacco) dependence in adulthood.13 Binge drinking affects 30%–50% and excessive cannabis use about 10% of the adolescent and young adult population in Europe and the United States.10,14,15

Reducing substance-related harm involves multisectoral approaches, including promotion of healthy child and adolescent development, regulatory policies and early treatment interventions.16 Family physicians can add to these public health messages by personalizing their content within brief interventions.17,18 There is evidence that brief interventions can encourage young people to reduce substance use, yet most studies have been conducted in community settings (mainly educational), emergency services or specialized addiction clinics.1,16 Studies aimed at adult populations have shown favourable effects of brief alcohol interventions, and to some extent brief cannabis interventions, in primary care.19–22 These interventions have been recommended for adolescent populations.4,5,16 Yet young people have different modes of substance use and communication styles that may limit the extent to which evidence from adult studies can apply to them.

Recently, a systematic review of brief interventions to reduce alcohol use in adolescents identified only 1 randomized controlled trial in primary care.23 The tested intervention, which was not provided by family physicians but involved audio self-assessment, was ineffective in reducing alcohol use in exposed adolescents.24 Sanci and colleagues showed that training family physicians to address health-risk behaviours among adolescents improved provider performance, but the extent to which this translates into improved patient outcomes remains unknown.25,26 Two nonrandomized studies suggested that screening for substance use and brief advice by family physicians could favour reduced alcohol and cannabis use among adolescents,27,28 but evidence from randomized trials is lacking.29

We conducted the PRISM-Ado (Primary care Intervention Addressing Substance Misuse in Adolescents) trial, a cluster randomized controlled trial of the effectiveness of training family physicians to deliver a brief intervention to address binge drinking and excessive cannabis use among young people.

9.

Background

Little is known about the distribution of diagnoses that account for fatigue in patients in primary care. We evaluated the diagnoses established within 1 year after presentation with fatigue in primary care that were possibly associated with the fatigue.

Methods

We conducted a prospective observational cohort study with 1-year follow-up. We included adult patients who presented with a new episode of fatigue between June 2004 and January 2006. We extracted data on diagnoses during the follow-up period from the patients’ medical records as well as data on pre-existing chronic diseases.

Results

Of the 571 patients for whom diagnostic data were available, 268 (46.9%) had received one or more diagnoses that could be associated with fatigue. The diagnoses were diverse and mostly included symptom diagnoses, with main categories being musculoskeletal (19.4%) and psychological problems (16.5%). Clear somatic pathology was diagnosed in 47 (8.2%) of the patients. Most diagnoses were not made during the consultation when fatigue was presented.

Interpretation

Only a minority of patients were diagnosed with serious pathology. Half of the patients did not receive any diagnosis that could explain their fatigue. Nevertheless, because of the wide range of conditions and symptoms that may explain or co-occur with fatigue, fatigue is a complex problem that deserves attention in its own right, not only as a symptom of an underlying specific disease.

Fatigue is a common problem seen in primary care. It is reported as the main presenting symptom in 5% to 10% of patients.1–3 Both its nonspecific nature and its high prevalence make fatigue a challenging problem for general practitioners to manage. The symptom may indicate a wide range of conditions, including respiratory, cardiovascular, endocrine, gastrointestinal, hematologic, infectious, neurologic and musculoskeletal diseases, mood disorders, sleep disorders and cancer.4–13 Patients with a chronic disease often report symptoms of fatigue,14,15 and the prevalence of chronic disease is higher among patients presenting with fatigue than among other patients.16 Regardless of the underlying pathology, fatigue is a phenomenon with social, physiologic and psychological dimensions.17–20

Little is known about the distribution of diagnoses in populations of patients presenting with fatigue as a main symptom in primary care. A Dutch morbidity registration of episodes of care showed that fatigue was a symptom diagnosis in about 40% of patients.21 Previous studies involving patients presenting with fatigue as a main symptom either had small samples22,23 or reported diagnoses based on standardized laboratory testing at baseline.24,25 Because of the wide range of possible diagnoses, large observational studies are needed to determine the distribution of diagnoses in primary care.

We carried out a prospective study involving patients in primary care practices in whom fatigue was the main presenting symptom. The aim of our study was to describe the distribution of diagnoses established within 1 year after presentation that were possibly associated with the fatigue.

10.

Background:

Persistent postoperative pain continues to be an underrecognized complication. We examined the prevalence of and risk factors for this type of pain after cardiac surgery.

Methods:

We enrolled patients scheduled for coronary artery bypass grafting or valve replacement, or both, from Feb. 8, 2005, to Sept. 1, 2009. Validated measures were used to assess (a) preoperative anxiety and depression, tendency to catastrophize in the face of pain, health-related quality of life and presence of persistent pain; (b) pain intensity and interference in the first postoperative week; and (c) presence and intensity of persistent postoperative pain at 3, 6, 12 and 24 months after surgery. The primary outcome was the presence of persistent postoperative pain during 24 months of follow-up.

Results:

A total of 1247 patients completed the preoperative assessment. Follow-up retention rates at 3 and 24 months were 84% and 78%, respectively. The prevalence of persistent postoperative pain decreased significantly over time, from 40.1% at 3 months to 22.1% at 6 months, 16.5% at 12 months and 9.5% at 24 months; the pain was rated as moderate to severe in 3.6% at 24 months. Acute postoperative pain predicted both the presence and severity of persistent postoperative pain. The more intense the pain during the first week after surgery and the more it interfered with functioning, the more likely the patients were to report persistent postoperative pain. Pre-existing persistent pain and increased preoperative anxiety also predicted the presence of persistent postoperative pain.

Interpretation:

Persistent postoperative pain of nonanginal origin after cardiac surgery affected a substantial proportion of the study population. Future research is needed to determine whether interventions to modify certain risk factors, such as preoperative anxiety and the severity of pain before and immediately after surgery, may help to minimize or prevent persistent postoperative pain. Postoperative pain that persists beyond the normal time for tissue healing (> 3 mo) is increasingly recognized as an important complication after various types of surgery and can have serious consequences on patients’ daily living.1–3 Cardiac surgeries, such as coronary artery bypass grafting (CABG) and valve replacement, rank among the most frequently performed interventions worldwide.4 They aim to improve survival and quality of life by reducing symptoms, including anginal pain. However, persistent postoperative pain of nonanginal origin has been reported in 7% to 60% of patients following these surgeries.5–23 Such variability is common in other types of major surgery and is due mainly to differences in the definition of persistent postoperative pain, study design, data collection methods and duration of follow-up.1–3,24 Few prospective cohort studies have examined the exact time course of persistent postoperative pain after cardiac surgery, and follow-up has always been limited to a year or less.9,14,25 Factors that put patients at risk of this type of problem are poorly understood.26 Studies have reported inconsistent results regarding the contribution of age, sex, body mass index, preoperative angina, surgical technique, grafting site, postoperative complications or level of opioid consumption after surgery.5–7,9,13,14,16–19,21–23,25,27 Only 1 study investigated the role of chronic nonanginal pain before surgery as a contributing factor;21 5 others prospectively assessed the association between persistent postoperative pain and acute pain intensity in the first postoperative week but reported conflicting results.13,14,21,22,25 All of the above studies were carried out in a single hospital and included relatively small samples. None of the studies examined the contribution of psychological factors such as levels of anxiety and depression before cardiac surgery, although these factors have been shown to influence acute or persistent postoperative pain in other types of surgery.1,24,28,29 We conducted a prospective multicentre cohort study (the CARD-PAIN study) to determine the prevalence of persistent postoperative pain of nonanginal origin up to 24 months after cardiac surgery and to identify risk factors for the presence and severity of the condition.  相似文献   

11.

Background

Fractures have largely been assessed by their impact on quality of life or health care costs. We conducted this study to evaluate the relation between fractures and mortality.

Methods

A total of 7753 randomly selected people (2187 men and 5566 women) aged 50 years and older from across Canada participated in a 5-year observational cohort study. Incident fractures were identified on the basis of validated self-report and were classified by type (vertebral, pelvic, forearm or wrist, rib, hip and “other”). We subdivided fracture groups by the year in which the fracture occurred during follow-up; those occurring in the fourth and fifth years were grouped together. We examined the relation between the time of the incident fracture and death.

Results

Compared with participants who had no fracture during follow-up, those who had a vertebral fracture in the second year were at increased risk of death (adjusted hazard ratio [HR] 2.7, 95% confidence interval [CI] 1.1–6.6); also at risk were those who had a hip fracture during the first year (adjusted HR 3.2, 95% CI 1.4–7.4). Among women, the risk of death was increased for those with a vertebral fracture during the first year (adjusted HR 3.7, 95% CI 1.1–12.8) or the second year of follow-up (adjusted HR 3.2, 95% CI 1.2–8.1). The risk of death was also increased among women with hip fracture during the first year of follow-up (adjusted HR 3.0, 95% CI 1.0–8.7).

Interpretation

Vertebral and hip fractures are associated with an increased risk of death. Interventions that reduce the incidence of these fractures need to be implemented to improve survival. Osteoporosis-related fractures are a major health concern, affecting a growing number of individuals worldwide. The burden of fracture has largely been assessed by the impact on health-related quality of life and health care costs.1,2 Fractures can also be associated with death. However, trials that have examined the relation between fractures and mortality have had limitations that may influence their results and the generalizability of the studies, including small samples,3,4 the examination of only 1 type of fracture,4–10 the inclusion of only women,8,11 the enrolment of participants from specific areas (i.e., hospitals or certain geographic regions),3,4,7,8,10,12 the nonrandom selection of participants3–11 and the lack of statistical adjustment for confounding factors that may influence mortality.3,5–7,12 We evaluated the relation between incident fractures and mortality over a 5-year period in a cohort of men and women 50 years of age and older. In addition, we examined whether other characteristics of participants were risk factors for death.  相似文献   

12.

Background:

Screening for methicillin-resistant Staphylococcus aureus (MRSA) is intended to reduce nosocomial spread by identifying patients colonized by MRSA. Given the widespread use of this screening, we evaluated its potential clinical utility in predicting the resistance of clinical isolates of S. aureus.

Methods:

We conducted a 2-year retrospective cohort study that included patients with documented clinical infection with S. aureus and prior screening for MRSA. We determined test characteristics, including sensitivity and specificity, of screening for predicting the resistance of subsequent S. aureus isolates.

Results:

Of 510 patients included in the study, 53 (10%) had positive results from MRSA screening, and 79 (15%) of infecting isolates were resistant to methicillin. Screening for MRSA predicted methicillin resistance of the infecting isolate with 99% (95% confidence interval [CI] 98%–100%) specificity and 63% (95% CI 52%–74%) sensitivity. When screening swabs were obtained within 48 hours before isolate collection, sensitivity increased to 91% (95% CI 71%–99%) and specificity was 100% (95% CI 97%–100%), yielding a negative likelihood ratio of 0.09 (95% CI 0.01–0.3) and a negative predictive value of 98% (95% CI 95%–100%). The time between swab and isolate collection was a significant predictor of concordance of methicillin resistance in swabs and isolates (odds ratio 6.6, 95% CI 1.6–28.2).

Interpretation:

A positive result from MRSA screening predicted methicillin resistance in a culture-positive clinical infection with S. aureus. Negative results on MRSA screening were most useful for excluding methicillin resistance of a subsequent infection with S. aureus when the screening swab was obtained within 48 hours before collection of the clinical isolate. Antimicrobial resistance is a global problem. The prevalence of resistant bacteria, including methicillin-resistant Staphylococcus aureus (MRSA), has reached high levels in many countries.1–3 Methicillin resistance in S. aureus is associated with excess mortality, hospital stays and health care costs,3,4 possibly owing to increased virulence or less effective treatments for MRSA compared with methicillin-sensitive S. aureus (MSSA).5 The initial selection of appropriate empirical antibiotic treatment affects mortality, morbidity and potential health care expenditures.6–8 The optimal choice of antibiotics in S. aureus infections is important for 3 major reasons: β-lactam antibiotics have shown improved efficacy over vancomycin and are the ideal treatment for susceptible strains of S. aureus;6 β-lactam antibiotics are ineffective against MRSA, and so vancomycin or other newer agents must be used empirically when MRSA is suspected; and unnecessary use of broad-spectrum antibiotics (e.g., vancomycin) can lead to the development of further antimicrobial resistance.9 It is therefore necessary to make informed decisions regarding selection of empirical antibiotics.10–13 Consideration of a patient’s previous colonization status is important, because colonization predates most hospital and community-acquired infections.10,14 Universal or targeted surveillance for MRSA has been implemented widely as a means of limiting transmission of this antibiotic-resistant pathogen.15,16 Although results of MRSA screening are not intended to guide empirical treatment, they may offer an additional benefit among patients in whom clinical infection with S. aureus develops. Studies that examined the effects of MRSA carriage on the subsequent likelihood of infection allude to the potential diagnostic benefit of prior screening for MRSA.17,18 Colonization by MRSA at the time of hospital admission is associated with a 13-fold increased risk of subsequent MRSA infection.17,18 Moreover, studies that examined nasal carriage of S. aureus after documented S. aureus bacteremia have shown remarkable concordance between the genotypes of paired colonizing and invasive strains (82%–94%).19,20 The purpose of our study was to identify the usefulness of prior screening for MRSA for predicting methicillin resistance in culture-positive S. aureus infections.  相似文献   
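The negative likelihood ratio quoted in the Results section follows directly from the reported sensitivity and specificity. A minimal sketch of that arithmetic (the function name is illustrative, not from the study):

```python
def neg_likelihood_ratio(sensitivity: float, specificity: float) -> float:
    """LR- = (1 - sensitivity) / specificity: how strongly a negative
    screening result argues against methicillin resistance."""
    return (1 - sensitivity) / specificity

# Figures for swabs obtained within 48 hours of isolate collection
# (sensitivity 91%, specificity 100%, per the abstract above):
lr_minus = neg_likelihood_ratio(0.91, 1.00)
print(round(lr_minus, 2))  # 0.09, matching the reported value
```

An LR− near 0.1 shifts the odds of resistance down roughly tenfold, which is why the authors found a negative screen most useful within that 48-hour window.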

13.

Background

There is controversy about which children with minor head injury need to undergo computed tomography (CT). We aimed to develop a highly sensitive clinical decision rule for the use of CT in children with minor head injury.

Methods

For this multicentre cohort study, we enrolled consecutive children with blunt head trauma presenting with a score of 13–15 on the Glasgow Coma Scale and loss of consciousness, amnesia, disorientation, persistent vomiting or irritability. For each child, staff in the emergency department completed a standardized assessment form before any CT. The main outcomes were need for neurologic intervention and presence of brain injury as determined by CT. We developed a decision rule by using recursive partitioning to combine variables that were both reliable and strongly associated with the outcome measures and thus to find the best combinations of predictor variables that were highly sensitive for detecting the outcome measures with maximal specificity.

Results

Among the 3866 patients enrolled (mean age 9.2 years), 95 (2.5%) had a score of 13 on the Glasgow Coma Scale, 282 (7.3%) had a score of 14, and 3489 (90.2%) had a score of 15. CT revealed that 159 (4.1%) had a brain injury, and 24 (0.6%) underwent neurologic intervention. We derived a decision rule for CT of the head consisting of four high-risk factors (failure to reach a score of 15 on the Glasgow Coma Scale within two hours, suspicion of open skull fracture, worsening headache and irritability) and three additional medium-risk factors (large, boggy hematoma of the scalp; signs of basal skull fracture; dangerous mechanism of injury). The high-risk factors were 100.0% sensitive (95% CI 86.2%–100.0%) for predicting the need for neurologic intervention and would require that 30.2% of patients undergo CT. The medium-risk factors resulted in 98.1% sensitivity (95% CI 94.6%–99.4%) for the prediction of brain injury by CT and would require that 52.0% of patients undergo CT.

Interpretation

The decision rule developed in this study identifies children at two levels of risk. Once the decision rule has been prospectively validated, it has the potential to standardize and improve the use of CT for children with minor head injury. Each year more than 650 000 children are seen in hospital emergency departments in North America with “minor head injury,” i.e., history of loss of consciousness, amnesia or disorientation in a patient who is conscious and responsive in the emergency department (Glasgow Coma Scale score1 13–15). Although most patients with minor head injury can be discharged after a period of observation, a small proportion experience deterioration of their condition and need to undergo neurosurgical intervention for intracranial hematoma.2–4 The use of computed tomography (CT) in the emergency department is important in the early diagnosis of these intracranial hematomas. Over the past decade the use of CT for minor head injury has become increasingly common, while its diagnostic yield has remained low. In Canadian pediatric emergency departments the use of CT for minor head injury increased from 15% in 1995 to 53% in 2005.5,6 Despite this increase, a small but important number of pediatric intracranial hematomas are missed in Canadian emergency departments at the first visit.3 Few children with minor head injury have a visible brain injury on CT (4%–7%), and only 0.5% have an intracranial lesion requiring urgent neurosurgical intervention.5,7 The increased use of CT adds substantially to health care costs and exposes a large number of children each year to the potentially harmful effects of ionizing radiation.8,9 Currently, there are no widely accepted, evidence-based guidelines on the use of CT for children with minor head injury. A clinical decision rule incorporates three or more variables from the history, physical examination or simple tests10,11 into a tool that helps clinicians to make diagnostic or therapeutic decisions at the bedside. Members of our group have developed decision rules to allow physicians to be more selective in the use of radiography for children with injuries of the ankle12 and knee,13 as well as for adults with injuries of the ankle,14–17 knee,18–20 head21,22 and cervical spine.23,24 The aim of this study was to prospectively derive an accurate and reliable clinical decision rule for the use of CT for children with minor head injury.  相似文献   

14.
Sonja A. Swanson  Ian Colman 《CMAJ》2013,185(10):870-877

Background:

Ecological studies support the hypothesis that suicide may be “contagious” (i.e., exposure to suicide may increase the risk of suicide and related outcomes). However, this association has not been adequately assessed in prospective studies. We sought to determine the association between exposure to suicide and suicidality outcomes in Canadian youth.

Methods:

We used baseline information from the Canadian National Longitudinal Survey of Children and Youth between 1998/99 and 2006/07 with follow-up assessments 2 years later. We included all respondents aged 12–17 years in cycles 3–7 with reported measures of exposure to suicide.

Results:

We included 8766 youth aged 12–13 years, 7802 aged 14–15 years and 5496 aged 16–17 years. Exposure to a schoolmate’s suicide was associated with ideation at baseline among respondents aged 12–13 years (odds ratio [OR] 5.06, 95% confidence interval [CI] 3.04–8.40), 14–15 years (OR 2.93, 95% CI 2.02–4.24) and 16–17 years (OR 2.23, 95% CI 1.43–3.48). Such exposure was associated with attempts among respondents aged 12–13 years (OR 4.57, 95% CI 2.39–8.71), 14–15 years (OR 3.99, 95% CI 2.46–6.45) and 16–17 years (OR 3.22, 95% CI 1.62–6.41). Personally knowing someone who died by suicide was associated with suicidality outcomes for all age groups. We also assessed 2-year outcomes among respondents aged 12–15 years: a schoolmate’s suicide predicted suicide attempts among participants aged 12–13 years (OR 3.07, 95% CI 1.05–8.96) and 14–15 years (OR 2.72, 95% CI 1.47–5.04). Among those who reported a schoolmate’s suicide, personally knowing the decedent did not alter the risk of suicidality.

Interpretation:

We found that exposure to suicide predicts suicide ideation and attempts. Our results support school-wide interventions over the current targeted approach, particularly over strategies that direct interventions only toward children closest to the decedent. Suicidal thoughts and behaviours are prevalent1–3 and severe4–7 among adolescents. One hypothesized cause of suicidality is “suicide contagion” (i.e., exposure to suicide or related behaviours influences others to contemplate, attempt or die by suicide).8 Ecological studies support this theory: suicide and suspected suicide rates increase following a highly publicized suicide.9–11 However, such studies are prone to ecological fallacy and do not allow for detailed understanding of who may be most vulnerable. Adolescents may be particularly susceptible to this contagion effect. More than 13% of adolescent suicides are potentially explained by clustering;12–14 clustering may explain an even larger proportion of suicide attempts.15,16 Many local,17,18 national8,19 and international20 institutions recommend school- or community-level postvention strategies in the aftermath of a suicide to help prevent further suicides and suicidality. These postvention strategies typically focus on a short interval following the death (e.g., months) with services targeted toward the most at-risk individuals (e.g., those with depression).19 In this study, we assessed the association between exposure to suicide and suicidal thoughts and attempts among youth, using both cross-sectional and prospective (2-yr follow-up) analyses in a population-based cohort of Canadian youth.  相似文献   

15.

Background

Observational studies and randomized controlled trials have yielded inconsistent findings about the association between the use of acid-suppressive drugs and the risk of pneumonia. We performed a systematic review and meta-analysis to summarize this association.

Methods

We searched three electronic databases (MEDLINE [PubMed], Embase and the Cochrane Library) from inception to Aug. 28, 2009. Two evaluators independently extracted data. Because of heterogeneity, we used random-effects meta-analysis to obtain pooled estimates of effect.

Results

We identified 31 studies: five case–control studies, three cohort studies and 23 randomized controlled trials. A meta-analysis of the eight observational studies showed that the overall risk of pneumonia was higher among people using proton pump inhibitors (adjusted odds ratio [OR] 1.27, 95% confidence interval [CI] 1.11–1.46, I2 90.5%) and histamine2 receptor antagonists (adjusted OR 1.22, 95% CI 1.09–1.36, I2 0.0%). In the randomized controlled trials, use of histamine2 receptor antagonists was associated with an elevated risk of hospital-acquired pneumonia (relative risk 1.22, 95% CI 1.01–1.48, I2 30.6%).

Interpretation

Use of a proton pump inhibitor or histamine2 receptor antagonist may be associated with an increased risk of both community- and hospital-acquired pneumonia. Given these potential adverse effects, clinicians should use caution in prescribing acid-suppressive drugs for patients at risk. Recently, the medical literature has paid considerable attention to unrecognized adverse effects of commonly used medications and their potential public health impact.1 One group of medications in widespread use is acid-suppressive drugs, which represent the second leading category of medication worldwide, with sales totalling US$26.9 billion in 2005.2 Over the past 40 years, the development of potent acid-suppressive drugs, including proton pump inhibitors, has led to considerable improvements in the treatment of acid-related disorders of the upper gastrointestinal tract.3 Experts have generally viewed proton pump inhibitors as safe.4 However, potential complications such as gastrointestinal neoplasia, malabsorption of nutrients and increased susceptibility to infection have caused concern.5 Of special interest is the possibility that acid-suppressive drugs could increase susceptibility to respiratory infections because these drugs increase gastric pH, thus allowing bacterial colonization.6,7 Several previous studies have shown that treatment with acid-suppressive drugs might be associated with an increased risk of respiratory tract infections8 and community-acquired pneumonia in adults6,7 and children.9 However, the association between use of acid-suppressive drugs and risk of pneumonia has been inconsistent.10–13 Given the widespread use of proton pump inhibitors and histamine2 receptor antagonists, clarifying the potential impact of acid-suppressive therapy on the risk of pneumonia is of great importance to public health.14 Previous meta-analyses have focused on the role of acid-suppressive drugs in preventing stress ulcer,11,13,15 but none have examined pneumonia as the primary outcome. The aim of this study was to summarize the association between the use of acid-suppressive drugs and the risk of pneumonia in observational studies and randomized controlled trials.  相似文献   
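The random-effects pooling described in the Methods is commonly implemented with the DerSimonian–Laird estimator, although the abstract does not name the estimator used. A sketch under that assumption, with made-up study data for illustration:

```python
import math

def dersimonian_laird(log_ors, variances):
    """Pool log odds ratios under a DerSimonian-Laird random-effects model.
    Returns the pooled OR with its 95% confidence limits."""
    k = len(log_ors)
    w = [1 / v for v in variances]                       # fixed-effect weights
    y_fixed = sum(wi * yi for wi, yi in zip(w, log_ors)) / sum(w)
    q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, log_ors))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                   # between-study variance
    w_star = [1 / (v + tau2) for v in variances]         # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_star, log_ors)) / sum(w_star)
    se = math.sqrt(1 / sum(w_star))
    return (math.exp(pooled),
            math.exp(pooled - 1.96 * se),
            math.exp(pooled + 1.96 * se))

# Hypothetical studies: (log OR, variance) pairs, not data from the review.
or_pooled, lo, hi = dersimonian_laird([0.10, 0.35, 0.25], [0.01, 0.02, 0.015])
```

When the studies report identical effects, Q collapses to zero, tau² is clamped to zero, and the estimate reduces to the fixed-effect pooled OR; heterogeneity (high I², as in the proton pump inhibitor analysis) widens the interval instead.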

16.

Background

Recent studies have reported a high prevalence of relative adrenal insufficiency in patients with liver cirrhosis. However, the effect of corticosteroid replacement on mortality in this high-risk group remains unclear. We examined the effect of low-dose hydrocortisone in patients with cirrhosis who presented with septic shock.

Methods

We enrolled patients with cirrhosis and septic shock aged 18 years or older in a randomized double-blind placebo-controlled trial. Relative adrenal insufficiency was defined as a serum cortisol increase of less than 250 nmol/L or 9 μg/dL from baseline after stimulation with 250 μg of intravenous corticotropin. Patients were assigned to receive 50 mg of intravenous hydrocortisone or placebo every six hours until hemodynamic stability was achieved, followed by steroid tapering over eight days. The primary outcome was 28-day all-cause mortality.

Results

The trial was stopped for futility at interim analysis after 75 patients were enrolled. Relative adrenal insufficiency was diagnosed in 76% of patients. Compared with the placebo group (n = 36), patients in the hydrocortisone group (n = 39) had a significant reduction in vasopressor doses and higher rates of shock reversal (relative risk [RR] 1.58, 95% confidence interval [CI] 0.98–2.55, p = 0.05). Hydrocortisone use was not associated with a reduction in 28-day mortality (RR 1.17, 95% CI 0.92–1.49, p = 0.19) but was associated with an increase in shock relapse (RR 2.58, 95% CI 1.04–6.45, p = 0.03) and gastrointestinal bleeding (RR 3.00, 95% CI 1.08–8.36, p = 0.02).

Interpretation

Relative adrenal insufficiency was very common in patients with cirrhosis presenting with septic shock. Despite initial favourable effects on hemodynamic parameters, hydrocortisone therapy did not reduce mortality and was associated with an increase in adverse effects. (Current Controlled Trials registry no. ISRCTN99675218.) Cirrhosis is a leading cause of death worldwide,1 often with septic shock as the terminal event.2–9 Relative adrenal insufficiency shares similar features of distributive hyperdynamic shock with both cirrhosis and sepsis10,11 and increasingly has been reported to coexist with both conditions.11,12 The effect of low-dose hydrocortisone therapy on survival of critically ill patients in general with septic shock remains controversial, with conflicting results from randomized controlled trials13–17 and meta-analyses.18,19 The effect of hydrocortisone therapy on mortality among patients with cirrhosis, who are known to be a group at high risk for relative adrenal insufficiency, has not been studied and hence was the objective of our study.  相似文献   

17.

Background

Belgium’s law on euthanasia allows only physicians to perform the act. We investigated the involvement of nurses in the decision-making and in the preparation and administration of life-ending drugs with a patient’s explicit request (euthanasia) or without an explicit request. We also examined factors associated with these deaths.

Methods

In 2007, we surveyed 1678 nurses who, in an earlier survey, had reported caring for one or more patients who received a potential life-ending decision within the year before the survey. Eligible nurses were surveyed about their most recent case.

Results

The response rate was 76%. Overall, 128 nurses reported having cared for a patient who received euthanasia and 120 for a patient who received life-ending drugs without his or her explicit request. Respectively, 64% (75/117) and 69% (81/118) of these nurses were involved in the physician’s decision-making process. More often this entailed an exchange of information on the patient’s condition or the patient’s or relatives’ wishes (45% [34/117] and 51% [41/118]) than sharing in the decision-making (24% [18/117] and 31% [25/118]). The life-ending drugs were administered by the nurse in 12% of the cases of euthanasia, as compared with 45% of the cases of assisted death without an explicit request. In both types of assisted death, the nurses acted on the physician’s orders but mostly in the physician’s absence. Factors significantly associated with a nurse administering the life-ending drugs included being a male nurse working in a hospital (odds ratio [OR] 40.07, 95% confidence interval [CI] 7.37–217.79) and the patient being over 80 years old (OR 5.57, 95% CI 1.98–15.70).

Interpretation

By administering the life-ending drugs in some of the cases of euthanasia, and in almost half of the cases without an explicit request from the patient, the nurses in our study operated beyond the legal margins of their profession. Medical end-of-life decisions with a possible or certain life-shortening effect occur often in end-of-life care.1–5 The most controversial and ethically debated medical practice is that in which drugs are administered with the intention of ending the patient’s life, whether at the patient’s explicit request (euthanasia) or not. The debate focuses mainly on the role and responsibilities of the physician.6 However, physicians worldwide have reported that nurses are also involved in these medical practices, mostly in the decision-making and sometimes in the administration of the life-ending drugs.1–3,7–9 Critical care,10 oncology11 and palliative care nurses12,13 have confirmed this by reporting their own involvement, particularly in cases of euthanasia.14,15 In Belgium, the law permits physicians to perform euthanasia under strict requirements of due care, one of which is that they must discuss the request with the nurses involved.16 There are no further explicit stipulations determining the role of nurses in euthanasia. Physician-assisted death is legally regulated in some other countries as well (e.g., the Netherlands, Luxemburg and the US states of Oregon and Washington), without specifying the role of nurses. Reports from nurses in these jurisdictions are scarce, apart from some that are limited to particular settings, or lack details about their involvement.13,14 We conducted this study to investigate the involvement of nurses in Flanders, Belgium, in the decision-making and in the preparation and administration of life-ending drugs with, or without, a patient’s explicit request. We also examined patient- and nurse-related factors associated with the involvement of nurses in these deaths. 
In a related research article, Chambaere and colleagues describe the findings from a survey of physicians in Flanders about the practices of euthanasia and assisted suicide, and the use of life-ending drugs without an explicit request from the patient.17  相似文献   

18.

Background

Patients exposed to low-dose ionizing radiation from cardiac imaging and therapeutic procedures after acute myocardial infarction may be at increased risk of cancer.

Methods

Using an administrative database, we selected a cohort of patients who had an acute myocardial infarction between April 1996 and March 2006 and no history of cancer. We documented all cardiac imaging and therapeutic procedures involving low-dose ionizing radiation. The primary outcome was risk of cancer. Statistical analyses were performed using a time-dependent Cox model adjusted for age, sex and exposure to low-dose ionizing radiation from noncardiac imaging to account for work-up of cancer.

Results

Of the 82 861 patients included in the cohort, 77% underwent at least one cardiac imaging or therapeutic procedure involving low-dose ionizing radiation in the first year after acute myocardial infarction. The cumulative exposure to radiation from cardiac procedures was 5.3 millisieverts (mSv) per patient-year, of which 84% occurred during the first year after acute myocardial infarction. A total of 12 020 incident cancers were diagnosed during the follow-up period. There was a dose-dependent relation between exposure to radiation from cardiac procedures and subsequent risk of cancer. For every 10 mSv of low-dose ionizing radiation, there was a 3% increase in the risk of age- and sex-adjusted cancer over a mean follow-up period of five years (hazard ratio 1.003 per millisievert, 95% confidence interval 1.002–1.004).

Interpretation

Exposure to low-dose ionizing radiation from cardiac imaging and therapeutic procedures after acute myocardial infarction is associated with an increased risk of cancer.Studies involving atomic bomb survivors have documented an increased incidence of malignant neoplasm related to the radiation exposure.14 Survivors who were farther from the epicentre of the blast had a lower incidence of cancer, whereas those who were closer had a higher incidence.5 Similar risk estimates have been reported among workers in nuclear plants.6 However, little is known about the relation between exposure to low-dose ionizing radiation from medical procedures and the risk of cancer.In the past six decades since the atomic bomb explosions, most individuals worldwide have had minimal exposure to ionizing radiation. However, the recent increase in the use of medical imaging and therapeutic procedures involving low-dose ionizing radiation has led to a growing concern that individual patients may be at increased risk of cancer.712 Whereas strict regulatory control is placed on occupational exposure at work sites, no such control exists among patients who are exposed to such radiation.1316It is not only the frequency of these procedures that is increasing. Newer types of imaging procedures are using higher doses of low-dose ionizing radiation than those used with more traditional procedures.8,11 Among patients being evaluated for coronary artery disease, for example, coronary computed tomography is increasingly being used. This test may be used in addition to other tests such as nuclear scans, coronary angiography and percutaneous coronary intervention, each of which exposes the patient to low-dose ionizing radiation.12,1721 Imaging procedures provide information that can be used to predict the prognosis of patients with coronary artery disease. 
Since such predictions do not necessarily translate into better clinical outcomes,8,12 the prognostic value obtained from imaging procedures that use low-dose ionizing radiation needs to be balanced against the potential for risk.

Authors of several studies have estimated that the risk of cancer is not negligible among patients exposed to low-dose ionizing radiation.22–27 To our knowledge, however, none of these studies directly linked cumulative exposure to cancer risk. We examined a cohort of patients who had acute myocardial infarction and measured the association between low-dose ionizing radiation from cardiac imaging and therapeutic procedures and the risk of cancer.

19.
The erythropoietin receptor (EpoR) was discovered and described in red blood cell (RBC) progenitors, where it stimulates their proliferation and survival. The target in humans for EpoR agonist drugs appears clear: to treat anemia. However, there is evidence of the pleiotropic actions of erythropoietin (Epo). For that reason, rhEpo therapy was suggested as a reliable approach for treating a broad range of pathologies, including heart and cardiovascular diseases, neurodegenerative disorders (Parkinson’s and Alzheimer’s disease), spinal cord injury, stroke, diabetic retinopathy and rare diseases (Friedreich ataxia). Unfortunately, the side effects of rhEpo are also evident. A new generation of nonhematopoietic EpoR agonist drugs (asialoEpo, Cepo and ARA 290) has been investigated and further developed. These EpoR agonists, without the erythropoietic activity of Epo while preserving its tissue-protective properties, will provide better outcomes in ongoing clinical trials. Nonhematopoietic EpoR agonists represent safer and more effective surrogates for the treatment of several diseases, such as brain and peripheral nerve injury, diabetic complications, renal ischemia, rare diseases, myocardial infarction, chronic heart disease and others.

In principle, the erythropoietin receptor (EpoR) was discovered and described in red blood cell (RBC) progenitors, stimulating their proliferation and survival. Erythropoietin (Epo) is mainly synthesized in the fetal liver and adult kidneys (1–3). It was therefore hypothesized that Epo acts exclusively on erythroid progenitor cells. Accordingly, the target in humans for EpoR agonist drugs (such as recombinant erythropoietin [rhEpo], in general called erythropoiesis-stimulating agents) appears clear: to treat anemia. However, evidence of a kaleidoscope of pleiotropic actions of Epo has been provided (4,5). Research on the Epo/EpoR axis involved an initial journey from basic laboratory research to clinical therapeutics.
However, as a consequence of clinical observations, basic research on Epo/EpoR has come back to expand its clinical therapeutic applicability.

Although the kidney and liver have long been considered the major sites of synthesis, Epo mRNA expression has also been detected in the brain (neurons and glial cells), lung, heart, bone marrow, spleen, hair follicles, reproductive tract and osteoblasts (6–17). Accordingly, EpoR has been detected in other cells, such as neurons, astrocytes, microglia, immune cells, cancer cell lines, endothelial cells, bone marrow stromal cells and cells of the heart, reproductive system, gastrointestinal tract, kidney, pancreas and skeletal muscle (18–27). Conversely, Sinclair et al. (28) reported data questioning the presence or function of EpoR on nonhematopoietic cells (endothelial, neuronal and cardiac cells), suggesting that further studies are needed to confirm the diversity of EpoR. Elliott et al. (29) also showed that EpoR is virtually undetectable in human renal cells and other tissues, with no detectable EpoR on cell surfaces. These results have raised doubts about the preclinical basis for studies exploring the pleiotropic actions of rhEpo (30).

Because of these conflicting data, a return to basic research has become necessary, and many studies in animal models have been initiated or already performed. The effects of rhEpo administration on angiogenesis, myogenesis, shifts in muscle fiber types and oxidative enzyme activities in skeletal muscle (4,31), cardiac muscle mitochondrial biogenesis (32), cognition (31), antiapoptotic and antiinflammatory actions (33–37) and plasma glucose concentrations (38) have been extensively studied. Neuro- and cardioprotective properties have been the most frequently described.
Accordingly, rhEpo therapy was suggested as a reliable approach for treating a broad range of pathologies, including heart and cardiovascular diseases, neurodegenerative disorders (Parkinson’s and Alzheimer’s disease), spinal cord injury, stroke, diabetic retinopathy and rare diseases (Friedreich ataxia).

Unfortunately, the side effects of rhEpo are also evident. Epo is involved in regulating tumor angiogenesis (39) and probably in the survival and growth of tumor cells (25,40,41). rhEpo administration also induces serious side effects such as hypertension, polycythemia, myocardial infarction, stroke and seizures, platelet activation and increased thromboembolic risk, and immunogenicity (42–46), the most common being hypertension (47,48). A new generation of nonhematopoietic EpoR agonist drugs has hence been investigated and further developed in animal models. These compounds, namely asialoerythropoietin (asialoEpo) and carbamylated Epo (Cepo), were developed to preserve tissue-protective properties while reducing the erythropoietic activity of native Epo (49,50). These drugs will provide better outcomes in ongoing clinical trials. The advantage of using nonhematopoietic Epo analogs is that they avoid stimulating hematopoiesis, thereby preventing an increased hematocrit with its subsequent procoagulant status or increased blood pressure. In this regard, a new study by van Rijt et al. has shed new light on this topic (51). A new nonhematopoietic EpoR agonist analog named ARA 290 has been developed, showing promising cytoprotective capacity to prevent renal ischemia/reperfusion injury (51). ARA 290 is a short peptide that has shown no safety concerns in preclinical and human studies. In addition, ARA 290 has proven efficacious in cardiac disorders (52,53), neuropathic pain (54) and sarcoidosis-induced chronic neuropathic pain (55).
Thus, ARA 290 is a novel nonhematopoietic EpoR agonist with promising therapeutic options for treating a wide range of pathologies without increased risk of cardiovascular events.

Overall, this new generation of EpoR agonists, without the erythropoietic activity of Epo but preserving its tissue-protective properties, will provide better outcomes in ongoing clinical trials (49,50). Nonhematopoietic EpoR agonists represent safer and more effective surrogates for the treatment of several diseases, such as brain and peripheral nerve injury, diabetic complications, renal ischemia, rare diseases, myocardial infarction, chronic heart disease and others.

20.

Background:

Recent warnings from Health Canada regarding codeine for children have led to increased use of nonsteroidal anti-inflammatory drugs and morphine for common injuries such as fractures. Our objective was to determine whether orally administered morphine has superior analgesic efficacy to ibuprofen for fracture-related pain.

Methods:

We used a parallel group, randomized, blinded superiority design. Children who presented to the emergency department with an uncomplicated extremity fracture were randomly assigned to receive either morphine (0.5 mg/kg orally) or ibuprofen (10 mg/kg) for 24 hours after discharge. Our primary outcome was the change in pain score using the Faces Pain Scale — Revised (FPS-R). Participants were asked to record pain scores immediately before and 30 minutes after receiving each dose.

Results:

We analyzed data from 66 participants in the morphine group and 68 participants in the ibuprofen group. For both morphine and ibuprofen, we found a reduction in pain scores (mean pre–post difference ± standard deviation for dose 1: morphine 1.5 ± 1.2, ibuprofen 1.3 ± 1.0, between-group difference [δ] 0.2 [95% confidence interval (CI) −0.2 to 0.6]; dose 2: morphine 1.3 ± 1.3, ibuprofen 1.3 ± 0.9, δ 0 [95% CI −0.4 to 0.4]; dose 3: morphine 1.3 ± 1.4, ibuprofen 1.4 ± 1.1, δ −0.1 [95% CI −0.7 to 0.4]; and dose 4: morphine 1.5 ± 1.4, ibuprofen 1.1 ± 1.2, δ 0.4 [95% CI −0.2 to 1.1]). We found no significant between-group differences in the change in pain scores at any of the 4 time points (p = 0.6). Participants in the morphine group had significantly more adverse effects than those in the ibuprofen group (56.1% v. 30.9%, p < 0.01).
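The between-group differences and their confidence intervals can be reproduced from the summary statistics above. A sketch for dose 1, assuming a normal approximation for the difference of two independent means (the trial's exact method is not stated here, so this is illustrative only):

```python
import math

# Dose-1 summary statistics from the abstract: n, mean pre-post difference, SD.
n_mor, mean_mor, sd_mor = 66, 1.5, 1.2   # morphine group
n_ibu, mean_ibu, sd_ibu = 68, 1.3, 1.0   # ibuprofen group

delta = mean_mor - mean_ibu                               # between-group difference
se = math.sqrt(sd_mor**2 / n_mor + sd_ibu**2 / n_ibu)     # SE of the difference
lo, hi = delta - 1.96 * se, delta + 1.96 * se             # approximate 95% CI
print(f"delta = {delta:.1f}, 95% CI ({lo:.1f} to {hi:.1f})")
# delta = 0.2, 95% CI (-0.2 to 0.6)
```

The interval spans zero, which is why the dose-1 comparison is reported as nonsignificant; the same calculation applies to the other three doses.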

Interpretation:

We found no significant difference in analgesic efficacy between orally administered morphine and ibuprofen. However, morphine was associated with a significantly greater number of adverse effects. Our results suggest that ibuprofen remains safe and effective for outpatient pain management in children with uncomplicated fractures. Trial registration: ClinicalTrials.gov, no. NCT01690780.

There is ample evidence that analgesia is underused,1 underprescribed,2 delayed in its administration2 and suboptimally dosed3 in clinical settings. Children are particularly susceptible to suboptimal pain management4 and are less likely to receive opioid analgesia.5 Untreated pain in childhood has been reported to lead to short-term problems such as slower healing6 and to long-term issues such as anxiety, needle phobia,7 hyperesthesia8 and fear of medical care.9 The American Academy of Pediatrics has reaffirmed its advocacy for the appropriate use of analgesia for children with acute pain.10

Fractures constitute between 10% and 25% of all injuries.11 The most severe pain after an injury occurs within the first 48 hours, with more than 80% of children showing compromise in at least 1 functional area.12 Low rates of analgesia after discharge from hospital have been reported.13 A recently improved understanding of the pharmacogenomics of codeine has raised significant concerns about its safety14,15 and has led to a Food and Drug Administration boxed warning16 and a Health Canada advisory17 against its use.
Although ibuprofen has been cited as the agent most commonly used by caregivers to treat musculoskeletal pain,12,13 there are concerns that its use as monotherapy may lead to inadequate pain management.6,18 Evidence suggests that orally administered morphine13 and other opioids are increasingly being prescribed.19 However, evidence for the oral administration of morphine in acute pain management is limited.20,21 Additional studies are therefore needed to address this gap in knowledge and to provide a scientific basis for outpatient analgesic choices in children. Our objective was to assess whether orally administered morphine is superior to ibuprofen in relieving pain in children with nonoperative fractures.
