Similar articles (20 results)
1.

Background

Higher intake of monounsaturated fat may raise high-density lipoprotein (HDL) cholesterol without raising low-density lipoprotein (LDL) cholesterol. We tested whether increasing the monounsaturated fat content of a diet proven effective for lowering LDL cholesterol (dietary portfolio) also modified other risk factors for cardiovascular disease, specifically by increasing HDL cholesterol, lowering serum triglyceride and further reducing the ratio of total to HDL cholesterol.

Methods

Twenty-four patients with hyperlipidemia consumed a therapeutic diet very low in saturated fat for one month and were then randomly assigned to a dietary portfolio low or high in monounsaturated fatty acid for another month. We supplied participants’ food for the two-month period. Calorie intake was based on Harris–Benedict estimates for energy requirements.
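Energy requirements of this kind are conventionally estimated from sex, weight, height and age. A minimal sketch using the original 1919 Harris–Benedict coefficients (an assumption for illustration; the abstract does not say which revision of the equations was used):

```python
def harris_benedict_bee(sex: str, weight_kg: float, height_cm: float, age_yr: float) -> float:
    """Basal energy expenditure (kcal/day), original Harris-Benedict (1919) equations."""
    if sex == "male":
        return 66.473 + 13.7516 * weight_kg + 5.0033 * height_cm - 6.755 * age_yr
    if sex == "female":
        return 655.0955 + 9.5634 * weight_kg + 1.8496 * height_cm - 4.6756 * age_yr
    raise ValueError("sex must be 'male' or 'female'")

# Example: a 45-year-old man, 70 kg, 175 cm; basal estimate before any activity factor
bee = harris_benedict_bee("male", 70, 175, 45)
```

In practice the basal estimate is multiplied by an activity factor to set the calorie level of supplied food.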

Results

For patients who consumed the dietary portfolio high in monounsaturated fat, HDL cholesterol rose, whereas for those consuming the dietary portfolio low in monounsaturated fat, HDL cholesterol did not change. The 12.5% treatment difference was significant (0.12 mmol/L, 95% confidence interval [CI] 0.05 to 0.21, p = 0.003). The ratio of total to HDL cholesterol was reduced by 6.5% with the diet high in monounsaturated fat relative to the diet low in monounsaturated fat (−0.28, 95% CI −0.59 to −0.04, p = 0.025). Patients consuming the diet high in monounsaturated fat also had significantly higher concentrations of apolipoprotein AI, and their C-reactive protein was significantly lower. No treatment differences were seen for triglycerides, other lipids or body weight, and mean weight loss was similar for the diets high in monounsaturated fat (−0.8 kg) and low in monounsaturated fat (−1.2 kg).

Interpretation

Monounsaturated fat increased the effectiveness of a cholesterol-lowering dietary portfolio, despite statin-like reductions in LDL cholesterol. The potential benefits for cardiovascular risk were achieved through increases in HDL cholesterol, further reductions in the ratio of total to HDL cholesterol and reductions in C-reactive protein. (ClinicalTrials.gov trial register no. NCT00430430.)

Strategies that combine cholesterol-lowering foods or food components such as viscous fibres and plant sterols have been recommended to enhance the effectiveness of therapeutic diets low in saturated fat and cholesterol.1,2 Such dietary combinations (known as dietary portfolios) have resulted in substantial reductions in low-density lipoprotein (LDL) cholesterol3 and its apolipoprotein (apolipoprotein B), but the beneficial effects on high-density lipoprotein (HDL) cholesterol and its apolipoprotein (apolipoprotein AI) have been less apparent.4 Low concentrations of HDL cholesterol and apolipoprotein AI in the plasma and an elevated ratio of total to HDL cholesterol are recognized risk factors for cardiovascular disease.5–9 Thus, dietary strategies that both lower total and LDL cholesterol and raise HDL cholesterol should have broad application.
One method for increasing HDL cholesterol appears to be the use of monounsaturated fat, a key constituent of the Mediterranean diet, particularly when monounsaturated fat replaces dietary carbohydrates.10,11 Furthermore, increased intake of monounsaturated fat, through increased consumption of nuts and vegetable oil, has been associated with a reduced incidence of cardiovascular disease in cohort studies.12,13

We compared the effect on serum lipids of substituting 13.0% of total calories as carbohydrate with monounsaturated fatty acid in a dietary portfolio that has previously been shown, under controlled conditions, to be effective in lowering LDL cholesterol (by 28%) and the ratio of total to HDL cholesterol (by 24%).3 These reductions are similar to those seen with lovastatin 20 mg/d taken with the control diet.3

2.

Background

Minimally angulated fractures of the distal radius are common in children and have excellent outcomes. We conducted a randomized controlled trial to determine whether the use of a prefabricated splint is as effective as a cast in the recovery of physical function.

Methods

We included 96 children 5 to 12 years of age who were treated for a minimally angulated (≤ 15°) greenstick or transverse fracture of the wrist between April 2007 and September 2009 at a tertiary care pediatric hospital. Participants were randomly assigned to receive either a prefabricated wrist splint or a short arm cast for four weeks. The primary outcome was physical function at six weeks, measured using the performance version of the Activities Scale for Kids. Additional outcomes included the degree of angulation, range of motion, grip strength and complications.

Results

Of the 96 children, 46 received a splint and 50 a cast. The mean Activities Scale for Kids score at six weeks was 92.8 in the splint group and 91.4 in the cast group (difference 1.44, 95% confidence interval [CI] −1.75 to 4.62). Thus, the null hypothesis that the splint is less effective by at least seven points was rejected. The between-group difference in angulation at four weeks was not statistically significant (9.85° in the splint group and 8.20° in the cast group; mean difference 1.65°, 95% CI −1.82° to 5.11°), nor were the between-group differences in range of motion, grip strength and complications.
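The noninferiority conclusion above reduces to a simple check: the splint is judged noninferior when the entire confidence interval for the difference (splint minus cast) lies above the prespecified margin of −7 points. A minimal sketch with the reported numbers:

```python
def noninferior(ci_lower: float, margin: float) -> bool:
    """Noninferiority is concluded when the lower bound of the CI for
    (new treatment - standard) exceeds the prespecified margin."""
    return ci_lower > margin

# Reported difference 1.44 points, 95% CI -1.75 to 4.62, margin -7 points
splint_ok = noninferior(-1.75, -7.0)  # lower CI bound clears the margin
```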

Interpretation

In children with minimally angulated fractures of the distal radius, use of a splint was as effective as a cast with respect to the recovery of physical function. In addition, the devices were comparable in terms of the maintenance of fracture stability and the occurrence of complications. (ClinicalTrials.gov trial register no. NCT00610220.)

Fractures of the distal radius are the most common fracture in childhood1 and a frequent reason for visits to the emergency department.2 Although such fractures are often angulated at the time of injury, physicians often accept those with minimal angulation (≤ 15°) because of the unique capacity of skeletally immature bones in children to heal through remodelling.3–5 These minimally angulated fractures generally do not require reduction, have an excellent long-term prognosis and rarely result in complications such as malunion or deformity.3,5,6

The mainstay of treatment for these fractures has been the use of a short arm cast for four to six weeks and several follow-up visits to an orthopedic surgeon.3,5 However, a cast complicates hygiene for a child, and there may be risks that result from a poor fit.7 The noise from a cast saw and fear of its use, as well as discomfort from the cast, are among the most common negative aspects from a child’s perspective.8–10 Finally, there is the need for specialized resources for application and removal of the cast. Preliminary evidence from studies involving adults11,12 and studies of stable buckle fractures of the distal radius13–16 suggests that splinting offers a safe alternative.
However, this approach needs to be compared with the traditional use of casting in children who have minimally angulated and potentially unstable fractures of the distal radius before it can be recommended for clinical practice.

We conducted a noninferiority randomized controlled trial to determine whether a prefabricated wrist splint was as effective as routine casting in the recovery of physical function at six weeks in children who had a minimally angulated greenstick or transverse fracture of the distal radius. We also compared fracture angulation, range of motion, grip strength, complications and level of satisfaction.

3.
Background

Rates of imaging for low-back pain are high and are associated with increased health care costs and radiation exposure, as well as potentially poorer patient outcomes. We conducted a systematic review to investigate the effectiveness of interventions aimed at reducing the use of imaging for low-back pain.

Methods

We searched MEDLINE, Embase, CINAHL and the Cochrane Central Register of Controlled Trials from the earliest records to June 23, 2014. We included randomized controlled trials, controlled clinical trials and interrupted time series studies that assessed interventions designed to reduce the use of imaging in any clinical setting, including primary, emergency and specialist care. Two independent reviewers extracted data and assessed risk of bias. We used raw data on imaging rates to calculate summary statistics. Study heterogeneity prevented meta-analysis.

Results

A total of 8500 records were identified through the literature search. Of the 54 potentially eligible studies reviewed in full, 7 were included in our review. Clinical decision support involving a modified referral form in a hospital setting reduced imaging by 36.8% (95% confidence interval [CI] 33.2% to 40.5%). Targeted reminders to primary care physicians of appropriate indications for imaging reduced referrals for imaging by 22.5% (95% CI 8.4% to 36.8%). Interventions that used practitioner audits and feedback, practitioner education or guideline dissemination did not significantly reduce imaging rates. Lack of power within some of the included studies resulted in lack of statistical significance despite potentially clinically important effects.

Interpretation

Clinical decision support in a hospital setting and targeted reminders to primary care doctors were effective interventions in reducing the use of imaging for low-back pain.
These are potentially low-cost interventions that would substantially decrease medical expenditures associated with the management of low-back pain.

Current evidence-based clinical practice guidelines recommend against the routine use of imaging in patients presenting with low-back pain.1–3 Despite this, imaging rates remain high,4,5 which indicates poor concordance with these guidelines.6,7 Unnecessary imaging for low-back pain has been associated with poorer patient outcomes, increased radiation exposure and higher health care costs.8 No short- or long-term clinical benefits have been shown with routine imaging of the low back, and the diagnostic value of incidental imaging findings remains uncertain.9–12 A 2008 systematic review found that imaging accounted for 7% of direct costs associated with low-back pain, which in 1998 translated to more than US$6 billion in the United States and £114 million in the United Kingdom.13 Current costs are likely to be substantially higher, with an estimated 65% increase in spine-related expenditures between 1997 and 2005.14

Various interventions have been tried for reducing imaging rates among people with low-back pain. These include strategies targeted at the practitioner such as guideline dissemination,15–17 education workshops,18,19 audit and feedback of imaging use,7,20,21 ongoing reminders7 and clinical decision support.22–24 It is unclear which, if any, of these strategies are effective.25 We conducted a systematic review to investigate the effectiveness of interventions designed to reduce imaging rates for the management of low-back pain.
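Summary statistics like the percentage reductions above come from comparing imaging proportions before and after an intervention. A hedged sketch of one such computation, a difference in proportions with a Wald-style 95% interval (illustrative counts only, not the review's actual data):

```python
import math

def risk_difference_ci(events1: int, n1: int, events2: int, n2: int, z: float = 1.96):
    """Difference in proportions (group1 - group2) with a Wald 95% CI."""
    p1, p2 = events1 / n1, events2 / n2
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, diff - z * se, diff + z * se

# Illustrative only: imaging ordered in 50/100 pre-intervention vs 30/100 post-intervention encounters
diff, lo, hi = risk_difference_ci(50, 100, 30, 100)
```

If the interval excludes zero, as here, the reduction is statistically significant at the 5% level; a small sample widens the interval, which is the lack-of-power issue the review notes.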

4.

Background

The Canadian CT Head Rule was developed to allow physicians to be more selective when ordering computed tomography (CT) imaging for patients with minor head injury. We sought to evaluate the effectiveness of implementing this validated decision rule at multiple emergency departments.

Methods

We conducted a matched-pair cluster-randomized trial that compared the outcomes of 4531 patients with minor head injury during two 12-month periods (before and after) at hospital emergency departments in Canada, six of which were randomly allocated as intervention sites and six as control sites. At the intervention sites, active strategies, including education, changes to policy and real-time reminders on radiologic requisitions were used to implement the Canadian CT Head Rule. The main outcome measure was referral for CT scan of the head.

Results

Baseline characteristics of patients were similar at the control and intervention sites. At the intervention sites, the proportion of patients referred for CT imaging increased from the “before” period (62.8%) to the “after” period (76.2%) (difference +13.3%, 95% CI 9.7%–17.0%). At the control sites, the proportion of patients referred for CT imaging also increased, from 67.5% to 74.1% (difference +6.7%, 95% CI 2.6%–10.8%). The difference in the change in mean imaging rates from the “before” period to the “after” period between intervention and control hospitals was not significant (p = 0.16). There were no missed brain injuries or adverse outcomes.

Interpretation

Our knowledge-translation-based trial of the Canadian CT Head Rule did not reduce rates of CT imaging in Canadian emergency departments. Future studies should identify strategies to deal with barriers to implementation of this decision rule and explore more effective approaches to knowledge translation. (ClinicalTrials.gov trial register no. NCT00993252.)

More than six million instances of head and neck trauma are seen annually in emergency departments in Canada and the United States.1 Most are classified as minimal or minor head injury, but in a very small proportion, deterioration occurs and neurosurgical intervention is needed for intracranial hematoma.2,3 In recent years, North American use of computed tomography (CT) for many conditions in the emergency department, including minor head injury, has increased five-fold.1,4 Our own Canadian data showed marked variation in the use of CT for similar patients.5 Over 90% of CT scans are negative for clinically important brain injury.6–8 Owing to its high volume of usage, such imaging adds to health care costs.
There have also been increasing concerns about radiation-related risk from unnecessary CT scans.9,10 Additionally, unnecessary use of CT scanning compounds the Canadian problems of overcrowding of emergency departments and inadequate access to advanced imaging for nonemergency outpatients.

Clinical decision rules are derived from original research and may be defined as tools for clinical decision-making that incorporate three or more variables from a patient’s history, physical examination or simple tests.11–13 The Canadian CT Head Rule comprises five high-risk and two medium-risk criteria and was derived by prospectively evaluating 3121 adults with minor head injury (Figure 1) (Appendix 1, available at www.cmaj.ca/cgi/content/full/cmaj.091974/DC1).6 The resultant decision rule was then prospectively validated in a group of 2707 patients and showed high sensitivity (100%; 95% confidence interval [CI] 91–100) and reliability.14 The results of its validation suggested that, in patients presenting to emergency departments with minor head trauma, a rate of usage of CT imaging as low as 62.4% was possible and safe.

Figure 1: The Canadian CT Head Rule, as used in the study. Note: CSF = cerebrospinal fluid, CT = computed tomography, GCS = Glasgow Coma Scale.

Unfortunately, most decision rules are never used after derivation because they are not adequately tested in validation or implementation studies.15–19 We recently successfully implemented a similar rule, the Canadian C-Spine Rule, at multiple Canadian sites.20 Hence, the goal of the current study was to evaluate the effectiveness and safety of an active strategy to implement the Canadian CT Head Rule at multiple emergency departments. We wanted to test both the impact of the rule on rates of CT imaging and the effectiveness of an inexpensive and easily adopted implementation strategy. In addition, we wanted to further evaluate the accuracy of the rule.

5.

Background

Little is known about the incidence and causes of heparin-induced skin lesions. The 2 most commonly reported causes of heparin-induced skin lesions are immune-mediated heparin-induced thrombocytopenia and delayed-type hypersensitivity reactions.

Methods

We prospectively examined consecutive patients who received subcutaneous heparin (most often enoxaparin or nadroparin) for the presence of heparin-induced skin lesions. If such lesions were identified, we performed a skin biopsy, platelet count measurements, and antiplatelet-factor 4 antibody and allergy testing.

Results

We enrolled 320 patients. In total, 24 patients (7.5%, 95% confidence interval [CI] 4.7%–10.6%) had heparin-induced skin lesions. Delayed-type hypersensitivity reactions were identified as the cause in all 24 patients. One patient with histopathologic evidence of delayed-type hypersensitivity tested positive for antiplatelet-factor 4 antibodies. We identified the following risk factors for heparin-induced skin lesions: a body mass index greater than 25 (odds ratio [OR] 4.6, 95% CI 1.7–15.3), duration of heparin therapy longer than 9 days (OR 5.9, 95% CI 1.9–26.3) and female sex (OR 3.0, 95% CI 1.1–8.8).
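Odds ratios like those reported for body mass index, therapy duration and sex come from 2×2 tables of exposure by outcome, with the confidence interval computed on the log-odds scale. A minimal sketch (illustrative counts, not the study's data):

```python
import math

def odds_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """OR for a 2x2 table: a/b = cases/controls among exposed,
    c/d = cases/controls among unexposed; 95% CI via log-odds SE."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Illustrative: 10 lesions among 20 exposed vs 5 among 25 unexposed
or_, lo, hi = odds_ratio_ci(10, 10, 5, 20)  # OR = (10*20)/(10*5) = 4.0
```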

Interpretation

Heparin-induced skin lesions are relatively common, have identifiable risk factors and are commonly caused by a delayed-type hypersensitivity reaction (type IV allergic response). (ClinicalTrials.gov trial register no. NCT00510432.)

Heparin has been used as an anticoagulant for over 60 years.1 Well-known adverse effects of heparin therapy are bleeding, osteoporosis, hair loss, and immune and nonimmune heparin-induced thrombocytopenia. The incidence of heparin-induced skin lesions is unknown, despite their being increasingly reported.2–4 Heparin-induced skin lesions may be caused by at least 5 mechanisms: delayed-type (type IV) hypersensitivity responses,2,4–6 immune-mediated thrombocytopenia,3 type I allergic reactions,7,8 skin necrosis9 and pustulosis.10

Heparin-induced skin lesions may indicate the presence of life-threatening heparin-induced thrombocytopenia11 — even in the absence of thrombocytopenia.3 Given the rising number of reports of heparin-induced skin lesions, the absence of data on their incidence and causes, and the importance of correctly diagnosing this condition, we sought to determine the incidence of heparin-induced skin lesions.

6.
CMAJ. 2015;187(8):E243–E252
Background

We aimed to prospectively validate a novel 1-hour algorithm using high-sensitivity cardiac troponin T measurement for early rule-out and rule-in of acute myocardial infarction (MI).

Methods

In a multicentre study, we enrolled 1320 patients presenting to the emergency department with suspected acute MI. The high-sensitivity cardiac troponin T 1-hour algorithm, incorporating baseline values as well as absolute changes within the first hour, was validated against the final diagnosis, which was adjudicated by 2 independent cardiologists using all available information, including coronary angiography, echocardiography, follow-up data and serial measurements of high-sensitivity cardiac troponin T levels.

Results

Acute MI was the final diagnosis in 17.3% of patients. With application of the high-sensitivity cardiac troponin T 1-hour algorithm, 786 patients (59.5%) were classified as “rule-out,” 216 (16.4%) were classified as “rule-in” and 318 (24.1%) were assigned to the “observational zone.” The sensitivity and the negative predictive value for acute MI in the rule-out zone were 99.6% (95% confidence interval [CI] 97.6%–99.9%) and 99.9% (95% CI 99.3%–100%), respectively. The specificity and the positive predictive value for acute MI in the rule-in zone were 95.7% (95% CI 94.3%–96.8%) and 78.2% (95% CI 72.1%–83.6%), respectively. The 1-hour algorithm provided higher negative and positive predictive values than the standard interpretation of high-sensitivity cardiac troponin T using a single cut-off level (both p < 0.05).
Cumulative 30-day mortality was 0.0%, 1.6% and 1.9% in patients classified in the rule-out, observational and rule-in groups, respectively (p = 0.001).

Interpretation

This rapid strategy incorporating high-sensitivity cardiac troponin T baseline values and absolute changes within the first hour substantially accelerated the management of suspected acute MI by allowing safe rule-out as well as accurate rule-in of acute MI in 3 out of 4 patients. Trial registration: ClinicalTrials.gov, NCT00470587.

Acute myocardial infarction (MI) is a major cause of death and disability worldwide. As highly effective treatments are available, early and accurate detection of acute MI is crucial.1–5 Clinical assessment, 12-lead electrocardiography (ECG) and measurement of cardiac troponin levels form the pillars for the early diagnosis of acute MI in the emergency department. Major advances have recently been achieved by the development of more sensitive cardiac troponin assays.6–15 High-sensitivity cardiac troponin assays, which allow measurement of even low concentrations of cardiac troponin with high precision, have been shown to largely overcome the sensitivity deficit of conventional cardiac troponin assays within the first hours of presentation in the diagnosis of acute MI.6–15 These studies have consistently shown that the classic diagnostic interpretation of cardiac troponin as a dichotomous variable (troponin-negative and troponin-positive) no longer seems appropriate, because the positive predictive value for acute MI of being troponin-positive was only about 50%.6–15 The best way to interpret and clinically use high-sensitivity cardiac troponin levels in the early diagnosis of acute MI is still debated.3,5,7

In a pilot study, a novel high-sensitivity cardiac troponin T 1-hour algorithm was shown to allow accurate rule-out and rule-in of acute MI within 1 hour in up to 75% of patients.11 This algorithm is based on 2 concepts.
First, high-sensitivity cardiac troponin T is interpreted as a quantitative variable where the proportion of patients who have acute MI increases with increasing concentrations of cardiac troponin T.6–15 Second, early absolute changes in the concentrations within 1 hour provide incremental diagnostic information when added to baseline levels, with the combination acting as a reliable surrogate for late concentrations at 3 or 6 hours.6–15 However, many experts remained skeptical regarding the safety of the high-sensitivity cardiac troponin T 1-hour algorithm and its wider applicability.16 Accordingly, this novel diagnostic concept has not been adopted clinically to date. Because the clinical application of this algorithm would represent a profound change in clinical practice, prospective validation in a large cohort is mandatory before it can be considered for routine clinical use. The aim of this multicentre study was to prospectively validate the high-sensitivity cardiac troponin T 1-hour algorithm in a large independent cohort.
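The sensitivity, specificity and predictive values reported for the rule-out and rule-in zones are simple functions of the 2×2 classification counts. A minimal sketch (illustrative counts, not the trial's data):

```python
def diagnostic_metrics(tp: int, fp: int, tn: int, fn: int):
    """Standard test-performance measures from true/false positive/negative counts."""
    return {
        "sensitivity": tp / (tp + fn),  # of all acute MIs, fraction correctly flagged
        "specificity": tn / (tn + fp),  # of all non-MIs, fraction correctly cleared
        "ppv": tp / (tp + fp),          # rule-in zone: fraction of positives with MI
        "npv": tn / (tn + fn),          # rule-out zone: fraction of negatives without MI
    }

# Illustrative counts only
m = diagnostic_metrics(tp=90, fp=20, tn=80, fn=10)
```

For a safe rule-out strategy the critical quantities are sensitivity and NPV, which is why the trial reports them with tight confidence intervals near 100%.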

7.

Background:

Evidence from controlled trials encourages the intake of dietary pulses (beans, chickpeas, lentils and peas) as a method of improving dyslipidemia, but heart health guidelines have stopped short of ascribing specific benefits to this type of intervention or have graded the beneficial evidence as low. We conducted a systematic review and meta-analysis of randomized controlled trials (RCTs) to assess the effect of dietary pulse intake on established therapeutic lipid targets for cardiovascular risk reduction.

Methods:

We searched electronic databases and bibliographies of selected trials for relevant articles published through Feb. 5, 2014. We included RCTs of at least 3 weeks’ duration that compared a diet emphasizing dietary pulse intake with an isocaloric diet that did not include dietary pulses. The lipid targets investigated were low-density lipoprotein (LDL) cholesterol, apolipoprotein B and non–high-density lipoprotein (non-HDL) cholesterol. We pooled data using a random-effects model.

Results:

We identified 26 RCTs (n = 1037) that satisfied the inclusion criteria. Diets emphasizing dietary pulse intake at a median dose of 130 g/d (about 1 serving daily) significantly lowered LDL cholesterol levels compared with the control diets (mean difference −0.17 mmol/L, 95% confidence interval −0.25 to −0.09 mmol/L). Treatment effects on apolipoprotein B and non-HDL cholesterol were not observed.

Interpretation:

Our findings suggest that dietary pulse intake significantly reduces LDL cholesterol levels. Trials of longer duration and higher quality are needed to verify these results. Trial registration: ClinicalTrials.gov, no. NCT01594567.

Abnormal blood concentrations of lipids are one of the most important modifiable risk factors for cardiovascular disease. Although statins are effective in reducing low-density lipoprotein (LDL) cholesterol levels, major health organizations have maintained that the initial and essential approach to the prevention and management of cardiovascular disease is to modify dietary and lifestyle patterns.1–4

Dietary non–oil-seed pulses (beans, chickpeas, lentils and peas) are foods that have received particular attention for their ability to reduce the risk of cardiovascular disease. Consumption of dietary pulses was associated with a reduction in cardiovascular disease in a large observational study5 and with improvements in LDL cholesterol levels in small trials.6–8 Although most guidelines on the prevention of major chronic diseases encourage the consumption of dietary pulses as part of a healthy strategy,2,3,9–13 none has included recommendations based on the direct benefits of lowering lipid concentrations or reducing the risk of cardiovascular disease. In all cases, the evidence on which recommendations have been based was assigned a low grade,2,3,9–13 and dyslipidemia guidelines do not address dietary pulse intake directly.1,4

To improve the evidence on which dietary guidelines are based, we conducted a systematic review and meta-analysis of randomized controlled trials (RCTs) of the effect of dietary pulse intake on established therapeutic lipid targets for cardiovascular risk reduction. The lipid targets were LDL cholesterol, apolipoprotein B and non–high-density lipoprotein (non-HDL) cholesterol.
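A pooled mean difference under a random-effects model is an inverse-variance weighted average in which a between-study variance component is added to each study's weight. A hedged sketch of the DerSimonian–Laird estimator, one common choice (the abstract says only "random-effects model"; effects and variances below are illustrative, not the trial-level data):

```python
def dersimonian_laird(effects, variances):
    """Pool per-study mean differences under a random-effects model.
    Returns (pooled_effect, tau2), where tau2 is the between-study variance."""
    k = len(effects)
    w = [1 / v for v in variances]
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))  # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)
    w_star = [1 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    return pooled, tau2

# Illustrative LDL mean differences (mmol/L) and variances from three hypothetical studies
pooled, tau2 = dersimonian_laird([-0.2, -0.1, -0.2], [0.01, 0.01, 0.01])
```

When the studies are homogeneous, tau2 collapses to zero and the result reduces to a fixed-effect pool; heterogeneity inflates tau2, widening the pooled interval.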

8.

Background:

Modifiable behaviours during early childhood may provide opportunities to prevent disease processes before adverse outcomes occur. Our objective was to determine whether young children’s eating behaviours were associated with increased risk of cardiovascular disease in later life.

Methods:

In this cross-sectional study involving children aged 3–5 years recruited from 7 primary care practices in Toronto, Ontario, we assessed the relation between eating behaviours as assessed by the NutriSTEP (Nutritional Screening Tool for Every Preschooler) questionnaire (completed by parents) and serum levels of non–high-density lipoprotein (HDL) cholesterol, a surrogate marker of cardiovascular risk. We also assessed the relation between dietary intake and serum non-HDL cholesterol, and between eating behaviours and other laboratory indices of cardiovascular risk (low-density lipoprotein [LDL] cholesterol, apolipoprotein B, HDL cholesterol and apolipoprotein A1).

Results:

A total of 1856 children were recruited from primary care practices in Toronto. Of these children, we included 1076 in our study for whom complete data and blood samples were available for analysis. The eating behaviours subscore of the NutriSTEP tool was significantly associated with serum non-HDL cholesterol (p = 0.03); for each unit increase in the eating behaviours subscore suggesting greater nutritional risk, we saw an increase of 0.02 mmol/L (95% confidence interval [CI] 0.002 to 0.05) in serum non-HDL cholesterol. The eating behaviours subscore was also associated with LDL cholesterol and apolipoprotein B, but not with HDL cholesterol or apolipoprotein A1. The dietary intake subscore was not associated with non-HDL cholesterol.

Interpretation:

Eating behaviours in preschool-aged children are important potentially modifiable determinants of cardiovascular risk and should be a focus for future studies of screening and behavioural interventions.

Modifiable behaviours during early childhood may provide opportunities to prevent later chronic diseases, as well as the behavioural patterns that contribute to them, before adverse outcomes occur. There is evidence that behavioural interventions during early childhood (e.g., ages 3–5 yr) can promote healthy eating.1 For example, repeated exposure to vegetables increases vegetable preference and intake,2 entertaining presentations of fruits (e.g., in the shape of a boat) increase their consumption,3 discussing internal satiety cues with young children reduces snacking,4 serving carrots before the main course (as opposed to with the main course) increases carrot consumption,5 and positive modelling of the consumption of healthy foods increases their intake by young children.6,7 Responsive eating behavioural styles, in which children are given access to healthy foods and allowed to determine the timing and pace of eating in response to internal cues with limited distractions, such as those from television, have been recommended by the Institute of Medicine.8

Early childhood is a critical period for assessing the origins of cardiometabolic disease and implementing preventive interventions.8 However, identifying behavioural risk factors for cardiovascular disease during early childhood is challenging, because signs of disease can take decades to appear.
One emerging surrogate marker for later cardiovascular risk is the serum concentration of non–high-density lipoprotein (HDL) cholesterol (or total cholesterol minus HDL cholesterol).9–12 The Young Finns Study found an association between non-HDL cholesterol levels during childhood (ages 3–18 yr) and an adult measure of atherosclerosis (carotid artery intima–media thickness), although this relation was not significant for the subgroup of younger female children (ages 3–9 yr).10,11 The Bogalusa Heart Study, which included a subgroup of children aged 2–15 years, found an association between low-density lipoprotein (LDL) cholesterol concentration (which is highly correlated with non-HDL cholesterol) and asymptomatic atherosclerosis at autopsy.12 Serum non-HDL cholesterol concentration is the dyslipidemia screening test recommended by the American Academy of Pediatrics for assessing cardiovascular risk in children aged 9–11 years.9 Cardiovascular risk stratification tools for adults, such as the Reynolds Risk Score (www.reynoldsriskscore.org) and the Framingham Heart Study coronary artery disease 10-year risk calculator (www.framinghamheartstudy.org/risk), do not enable directed interventions when cardiovascular disease processes begin — during childhood.

The primary objective of our study was to determine whether eating behaviours at 3–5 years of age, as assessed by the NutriSTEP (Nutritional Screening Tool for Every Preschooler) questionnaire,13,14 are associated with non-HDL cholesterol levels, a surrogate marker of cardiovascular risk.
Our secondary objectives were to determine whether other measures of nutritional risk, such as dietary intake, were associated with non-HDL cholesterol levels, and whether eating behaviours are associated with other cardiovascular risk factors, such as LDL cholesterol, apolipoprotein B, HDL cholesterol and apolipoprotein A1.

9.

Background

We developed and tested a new method, called the Evidence-based Practice for Improving Quality method, for continuous quality improvement.

Methods

We used cluster randomization to assign 6 neonatal intensive care units (ICUs) to reduce nosocomial infection (infection group) and 6 ICUs to reduce bronchopulmonary dysplasia (pulmonary group). We included all infants born at 32 or fewer weeks’ gestation. We collected baseline data for 1 year. Practice change interventions were implemented using rapid-change cycles for 2 years.

Results

The difference in incidence trends (slopes of trend lines) between the ICUs in the infection and pulmonary groups was −0.0020 (95% confidence interval [CI] −0.0007 to 0.0004) for nosocomial infection and −0.0006 (95% CI −0.0011 to −0.0001) for bronchopulmonary dysplasia.

Interpretation

The results suggest that the Evidence-based Practice for Improving Quality method reduced bronchopulmonary dysplasia in the neonatal ICU and that it may reduce nosocomial infection. Although methods for continuous quality improvement have been used to improve outcomes,1–3 some, such as the National Institute of Child Health and Human Development Quality Collaborative,4 have reported little or no effect in neonatal intensive care units (ICUs). These methods have been criticized for being based on intuition and anecdotes rather than on evidence.5 To address these concerns, researchers have developed methods aimed at improving the use of evidence in quality improvement. Tarnow-Mordi and colleagues,6 Sankaran and colleagues7 and others8–10 have used benchmarking instruments6,8,11 to show risk-adjusted variations in outcomes in neonatal ICUs. Synnes and colleagues12 reported that variations in the rates of intraventricular hemorrhage could be attributed to practice differences. MacNab and colleagues13 showed how multilevel modelling methods can be used to identify practice differences associated with variations in outcomes for targeted interventions and to quantify their attributable risks. Building on these results, we developed the Evidence-based Practice for Improving Quality method for continuous quality improvement. This method is based on 3 pillars: the use of evidence from published literature; the use of data from participating hospitals to identify hospital-specific practices for targeted intervention; and the use of a national network to share expertise. 
By selectively targeting hospital-specific practices for intervention, this method reduces the reliance on intuition and anecdotes that are associated with existing quality-improvement methods. Our objective was to evaluate the efficacy of the Evidence-based Practice for Improving Quality method by conducting a prospective cluster randomized controlled trial to reduce nosocomial infection and bronchopulmonary dysplasia among infants born at 32 or fewer weeks’ gestation and admitted to 12 Canadian Neonatal Network hospitals14 over a 36-month period. We hypothesized that the incidence of nosocomial infection would be reduced among infants in ICUs randomized to reduce infection but not among those in ICUs randomized to reduce bronchopulmonary dysplasia. We also hypothesized that the incidence of bronchopulmonary dysplasia would be reduced among infants in the ICUs randomized to reduce this outcome but not among those in ICUs randomized to reduce infections.  相似文献   
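The trial above compares "slopes of trend lines" between groups, i.e., ordinary-least-squares slopes of incidence over time. A minimal sketch of that quantity (the monthly incidence series below are synthetic, not the trial's data):

```python
def ols_slope(xs: list, ys: list) -> float:
    """Ordinary-least-squares slope of y on x (e.g., incidence rate over time)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

# Synthetic incidence series for two groups; the trial's primary comparison
# is the difference between such slopes (one group's trend minus the other's).
slope_a = ols_slope([0, 1, 2, 3], [0.30, 0.28, 0.26, 0.24])  # declining incidence
slope_b = ols_slope([0, 1, 2, 3], [0.30, 0.30, 0.30, 0.30])  # flat incidence
print(slope_a - slope_b)  # negative: incidence falling faster in group A
```

A confidence interval around this slope difference (as reported in the Results above) would additionally require the standard errors of the fitted slopes.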

10.
Background: Remote ischemic preconditioning is a simple therapy that may reduce cardiac and kidney injury. We undertook a randomized controlled trial to evaluate the effect of this therapy on markers of heart and kidney injury after cardiac surgery. Methods: Patients at high risk of death within 30 days after cardiac surgery were randomly assigned to undergo remote ischemic preconditioning or a sham procedure after induction of anesthesia. The preconditioning therapy was three 5-minute cycles of thigh ischemia, with 5 minutes of reperfusion between cycles. The sham procedure was identical except that ischemia was not induced. The primary outcome was peak creatine kinase–myocardial band (CK-MB) within 24 hours after surgery (expressed as multiples of the upper limit of normal, with log transformation). The secondary outcome was change in creatinine level within 4 days after surgery (expressed as log-transformed micromoles per litre). Patient-important outcomes were assessed up to 6 months after randomization. Results: We randomly assigned 128 patients to remote ischemic preconditioning and 130 to the sham therapy. There were no significant differences in postoperative CK-MB (absolute mean difference 0.15, 95% confidence interval [CI] −0.07 to 0.36) or creatinine (absolute mean difference 0.06, 95% CI −0.10 to 0.23). Other outcomes did not differ significantly for remote ischemic preconditioning relative to the sham therapy: for myocardial infarction, relative risk (RR) 1.35 (95% CI 0.85 to 2.17); for acute kidney injury, RR 1.10 (95% CI 0.68 to 1.78); for stroke, RR 1.02 (95% CI 0.34 to 3.07); and for death, RR 1.47 (95% CI 0.65 to 3.31). Interpretation: Remote ischemic preconditioning did not reduce myocardial or kidney injury during cardiac surgery. This type of therapy is unlikely to substantially improve patient-important outcomes in cardiac surgery. Trial registration: ClinicalTrials.gov, no. NCT01071265. Each year, 2 million patients worldwide undergo cardiac surgery. 
For more than 25% of these patients, the surgery is complicated by myocardial infarction (MI) and/or acute kidney injury, both of which are strongly associated with morbidity and mortality.1–3 Preventing MI and acute kidney injury after cardiac surgery would improve survival. An important cause of MI and acute kidney injury in patients undergoing cardiac surgery is ischemia–reperfusion injury.4,5 This type of injury begins as ischemia, which is then exacerbated by a systemic inflammatory response upon restoration of organ perfusion.6 Remote ischemic preconditioning may mitigate ischemia–reperfusion damage. It is accomplished by inducing, before surgery, brief episodes of ischemia in a limb, which lead to widespread activation of endogenous cellular systems that may protect organs from subsequent severe ischemia and reperfusion.7–9 Small randomized controlled trials evaluating the efficacy of remote ischemic preconditioning have had mixed results.10–17 Interpretation of their data is difficult because of small sample sizes and heterogeneity in the preconditioning procedures and patient populations (e.g., few trials have evaluated patients at high risk of organ injury and postoperative death). Whether remote ischemic preconditioning effectively mitigates ischemia–reperfusion injury therefore remains uncertain. We undertook the Remote Ischemic Preconditioning in Cardiac Surgery Trial (Remote IMPACT) to determine whether this procedure reduces myocardial and kidney injury. We proposed that a large trial to determine the effect on clinically important outcomes would be worthwhile only if a substantial effect on myocardial or kidney injury, or both, were observed in the current study.  相似文献   
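The primary outcome above is peak CK-MB expressed as a multiple of the upper limit of normal (ULN), analyzed on the log scale, a common transformation for right-skewed biomarker data. A hedged sketch (the assay values are hypothetical):

```python
import math

def log_uln_multiple(peak_ck_mb: float, upper_limit_normal: float) -> float:
    """Express peak CK-MB as a multiple of the ULN, then natural-log-transform."""
    return math.log(peak_ck_mb / upper_limit_normal)

# Hypothetical assay: peak CK-MB 25 U/L against a ULN of 5 U/L, i.e., 5x ULN
print(round(log_uln_multiple(25.0, 5.0), 3))  # → 1.609 (natural log of 5)
```

On this scale, a between-group difference in means corresponds to a ratio of geometric means on the original scale, which is why the abstract reports a log-scale "absolute mean difference".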

11.

Background:

The Low Risk Ankle Rule is a validated clinical decision rule that has the potential to safely reduce radiography in children with acute ankle injuries. We performed a phased implementation of the Low Risk Ankle Rule and evaluated its effectiveness in reducing the frequency of radiography in children with ankle injuries.

Methods:

Six Canadian emergency departments participated in the study from Jan. 1, 2009, to Aug. 31, 2011. At the 3 intervention sites, there were 3 consecutive 26-week phases. In phase 1, no interventions were implemented. In phase 2, we activated strategies to implement the ankle rule, including physician education, reminders and a computerized decision support system. In phase 3, we included only the decision support system. No interventions were introduced at the 3 pair-matched control sites. We examined the management of ankle injuries among children aged 3–16 years. The primary outcome was the proportion of children undergoing radiography.

Results:

We enrolled 2151 children with ankle injuries, 1055 at intervention and 1096 at control hospitals. During phase 1, the baseline frequency of pediatric ankle radiography at intervention and control sites was 96.5% and 90.2%, respectively. During phase 2, the frequency of ankle radiography decreased significantly at intervention sites relative to control sites (between-group difference −21.9%, 95% confidence interval [CI] −28.6% to −15.2%), without significant differences in patient or physician satisfaction. All effects were sustained in phase 3. The sensitivity of the Low Risk Ankle Rule during implementation was 100% (95% CI 85.4% to 100%), and the specificity was 53.1% (95% CI 48.1% to 58.1%).

Interpretation:

Implementation of the Low Risk Ankle Rule in several different emergency department settings reduced the rate of pediatric ankle radiography significantly and safely, without an accompanying change in physician or patient satisfaction. Trial registration: ClinicalTrials.gov, no. NCT00785876. Pediatric ankle injuries result in more than 2 million emergency department visits in Canada and the United States each year (Jeanette Tyas, Canadian Institute for Health Information: unpublished data, 2007).1,2 Radiographs are ordered for 85%–95% of these children,3 although only 12% of these reveal a fracture.4 Thus, radiography is unnecessary for most children’s ankle injuries, and these high rates of radiography needlessly expose children to radiation and are a questionable use of resources. The Low Risk Ankle Rule has 100% sensitivity with respect to identifying clinically important pediatric ankle fractures and has the potential to safely reduce imaging by about 60%.4 When the application of the rule suggests that radiography is not needed, it has been shown that any fractures that might be missed are clinically insignificant and can be safely and cost-effectively managed like an ankle sprain, with superior functional recovery.5 Finally, the Low Risk Ankle Rule has been shown to have excellent acceptability when tested on emergency physicians.6 The main objective of this study was to implement the ankle rule in several different emergency department settings using a multimodal knowledge translation strategy and to evaluate its impact on the frequency of radiography in children presenting with acute ankle injuries.  相似文献   
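The sensitivity and specificity figures above follow directly from a 2×2 classification of rule result against fracture status. A minimal sketch (the counts below are hypothetical, not the trial's raw data):

```python
def sensitivity(true_pos: int, false_neg: int) -> float:
    """Proportion of children with a clinically important fracture whom the rule flags."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg: int, false_pos: int) -> float:
    """Proportion of fracture-free children whom the rule would spare radiography."""
    return true_neg / (true_neg + false_pos)

# Hypothetical counts: 23 fractures all flagged, 212 of 400 fracture-free spared
print(sensitivity(23, 0))             # → 1.0, i.e., 100% sensitivity
print(round(specificity(212, 188), 3))
```

A 100% sensitivity with moderate specificity is the usual profile sought for a rule whose purpose is to safely *rule out* the need for imaging.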

12.

Background

The pathogenesis of appendicitis is unclear. We evaluated whether exposure to air pollution was associated with an increased incidence of appendicitis.

Methods

We identified 5191 adults who had been admitted to hospital with appendicitis between Apr. 1, 1999, and Dec. 31, 2006. The air pollutants studied were ozone, nitrogen dioxide, sulfur dioxide, carbon monoxide, and suspended particulate matter of less than 10 μm and less than 2.5 μm in diameter. We estimated the odds of appendicitis relative to short-term increases in concentrations of selected pollutants, alone and in combination, after controlling for temperature and relative humidity as well as the effects of age, sex and season.

Results

An increase in the interquartile range of the 5-day average of ozone was associated with appendicitis (odds ratio [OR] 1.14, 95% confidence interval [CI] 1.03–1.25). In summer (July–August), the effects were most pronounced for ozone (OR 1.32, 95% CI 1.10–1.57), sulfur dioxide (OR 1.30, 95% CI 1.03–1.63), nitrogen dioxide (OR 1.76, 95% CI 1.20–2.58), carbon monoxide (OR 1.35, 95% CI 1.01–1.80) and particulate matter less than 10 μm in diameter (OR 1.20, 95% CI 1.05–1.38). We observed a significant effect of the air pollutants in the summer months among men but not among women (e.g., OR for increase in the 5-day average of nitrogen dioxide 2.05, 95% CI 1.21–3.47, among men and 1.48, 95% CI 0.85–2.59, among women). The double-pollutant model of exposure to ozone and nitrogen dioxide in the summer months was associated with attenuation of the effects of ozone (OR 1.22, 95% CI 1.01–1.48) and nitrogen dioxide (OR 1.48, 95% CI 0.97–2.24).

Interpretation

Our findings suggest that some cases of appendicitis may be triggered by short-term exposure to air pollution. If these findings are confirmed, measures to improve air quality may help to decrease rates of appendicitis. Appendicitis was introduced into the medical vernacular in 1886.1 Since then, the prevailing theory of its pathogenesis implicated an obstruction of the appendiceal orifice by a fecalith or lymphoid hyperplasia.2 However, this notion does not completely account for variations in incidence observed by age,3,4 sex,3,4 ethnic background,3,4 family history,5 temporal–spatial clustering6 and seasonality,3,4 nor does it completely explain the trends in incidence of appendicitis in developed and developing nations.3,7,8 The incidence of appendicitis increased dramatically in industrialized nations in the 19th century and in the early part of the 20th century.1 Without explanation, it decreased in the middle and latter part of the 20th century.3 The decrease coincided with legislation to improve air quality. For example, after the United States Clean Air Act was passed in 1970,9 the incidence of appendicitis decreased by 14.6% from 1970 to 1984.3 Likewise, a 36% drop in incidence was reported in the United Kingdom between 1975 and 1994,10 after legislation was passed in 1956 and 1968 to improve air quality and in the 1970s to control industrial sources of air pollution. 
Furthermore, appendicitis is less common in developing nations; however, as these countries become more industrialized, the incidence of appendicitis has been increasing.7 Air pollution is known to be a risk factor for multiple conditions, to exacerbate disease states and to increase all-cause mortality.11 It has a direct effect on pulmonary diseases such as asthma11 and on nonpulmonary diseases including myocardial infarction, stroke and cancer.11–13 Inflammation induced by exposure to air pollution contributes to some adverse health effects.14–17 Similar to the effects of air pollution, a proinflammatory response has been associated with appendicitis.18–20 We conducted a case–crossover study involving a population-based cohort of patients admitted to hospital with appendicitis to determine whether short-term increases in concentrations of selected air pollutants were associated with hospital admission because of appendicitis.  相似文献   
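Odds ratios in the Results above are reported per interquartile-range (IQR) increase in pollutant concentration. On the log-odds scale this is just a rescaling of the per-unit regression coefficient. A hedged sketch (the coefficient and IQR below are hypothetical, chosen only to illustrate the conversion):

```python
import math

def odds_ratio_per_iqr(beta_per_unit: float, iqr: float) -> float:
    """Convert a per-unit log-odds coefficient into an odds ratio per IQR increase.

    OR per IQR = exp(beta * IQR), because log-odds are linear in exposure.
    """
    return math.exp(beta_per_unit * iqr)

# Hypothetical: logistic-regression coefficient 0.0066 per ppb of ozone,
# with an IQR of 20 ppb for the 5-day ozone average
print(round(odds_ratio_per_iqr(0.0066, 20.0), 2))  # → 1.14
```

Reporting per IQR rather than per unit makes effect sizes comparable across pollutants measured on very different scales.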

13.

Background:

A link between obstructive sleep apnea and cancer development or progression has been suggested, possibly through chronic hypoxemia, but supporting evidence is limited. We examined the association between the severity of obstructive sleep apnea and prevalent and incident cancer, controlling for known risk factors for cancer development.

Methods:

We included all adults referred with possible obstructive sleep apnea who underwent a first diagnostic sleep study at a single large academic hospital between 1994 and 2010. We linked patient data with data from Ontario health administrative databases from 1991 to 2013. Cancer diagnosis was derived from the Ontario Cancer Registry. We assessed the cross-sectional association between obstructive sleep apnea and prevalent cancer at the time of the sleep study (baseline) using logistic regression analysis. Cox regression models were used to investigate the association between obstructive sleep apnea and incident cancer among patients free of cancer at baseline.

Results:

Of 10 149 patients who underwent a sleep study, 520 (5.1%) had a cancer diagnosis at baseline. Over a median follow-up of 7.8 years, 627 (6.5%) of the 9629 patients who were free of cancer at baseline had incident cancer. In multivariable regression models, the severity of sleep apnea was not significantly associated with either prevalent or incident cancer after adjustment for age, sex, body mass index and smoking status at baseline (apnea–hypopnea index > 30 v. < 5: adjusted odds ratio [OR] 0.96, 95% confidence interval [CI] 0.71–1.30, for prevalent cancer, and adjusted hazard ratio [HR] 1.02, 95% CI 0.80–1.31, for incident cancer; sleep time spent with oxygen saturation < 90%, per 10-minute increase: adjusted OR 1.01, 95% CI 1.00–1.03, for prevalent cancer, and adjusted HR 1.00, 95% CI 0.99–1.02, for incident cancer).

Interpretation:

In a large cohort, the severity of obstructive sleep apnea was not independently associated with either prevalent or incident cancer. Additional studies are needed to elucidate whether there is an independent association with specific types of cancer. Obstructive sleep apnea is a sleep-related breathing disorder characterized by repetitive episodes of upper-airway obstruction during sleep. Through sleep fragmentation, hypoxemia, hypercapnia, swings in intrathoracic pressure and increased sympathetic activity, these episodes lead to symptoms and health consequences.1 In 2009, 23% of Canadian adults reported risk factors for obstructive sleep apnea, and 5% of the population 45 years and older reported being told by a health professional that they had the condition.2 Obstructive sleep apnea has been postulated to cause cancer3,4 or cancer progression,5 possibly through chronic intermittent hypoxemia,6 thus making it a potential modifiable risk factor for cancer development.7 However, the longitudinal evidence on this association is limited. Four cohort studies evaluated the longitudinal association between obstructive sleep apnea (expressed by the apnea–hypopnea index, oxygen desaturation or symptoms) and cancer development or cancer-related mortality (Appendix 1, available at www.cmaj.ca/lookup/suppl/doi:10.1503/cmaj.140238/-/DC1).3–5,8 All had limitations. Of the 3 that reported a positive association,3,5,8 2 studies included a small number of participants with severe obstructive sleep apnea, had a relatively small number of events and did not consider competing risk of death from other causes;5,8 and 2 used less reliable sleep-testing devices to define obstructive sleep apnea,3,8 which may have introduced measurement bias. 
In the only study that did not show an association between obstructive sleep apnea and cancer,4 the former was diagnosed on the basis of self-reported symptoms, which could have resulted in misclassification of exposure. There is a need for a sufficiently large cohort study, with follow-up long enough for cancer to develop, that adjusts for important potential confounders, examines common cancer subtypes and rigorously assesses both obstructive sleep apnea and cancer.7,9,10 Our study was designed to improve upon the methods of published studies. We examined the association between the severity of obstructive sleep apnea (expressed by the apnea–hypopnea index or oxygen desaturation) and prevalent or incident cancer after controlling for known cancer risk factors.  相似文献   

14.

Background:

Disturbance of the sleep–wake cycle is a characteristic of delirium. In addition, changes in melatonin rhythm influence the circadian rhythm and are associated with delirium. We compared the effect of melatonin and placebo on the incidence and duration of delirium.

Methods:

We performed this multicentre, double-blind, randomized controlled trial between November 2008 and May 2012 in 1 academic and 2 nonacademic hospitals. Patients aged 65 years or older who were scheduled for acute hip surgery were eligible for inclusion. Patients received melatonin 3 mg or placebo in the evening for 5 consecutive days, starting within 24 hours after admission. The primary outcome was incidence of delirium within 8 days of admission. We also monitored the duration of delirium.

Results:

A total of 452 patients were randomly assigned to the 2 study groups. We subsequently excluded 74 patients for whom the primary end point could not be measured or who had delirium before the second day of the study. After these postrandomization exclusions, data for 378 patients were included in the main analysis. The overall mean age was 84 years, 238 (63.0%) of the patients lived at home before admission, and 210 (55.6%) had cognitive impairment. We observed no effect of melatonin on the incidence of delirium: 55/186 (29.6%) in the melatonin group v. 49/192 (25.5%) in the placebo group; difference 4.1 (95% confidence interval −5.0 to 13.1) percentage points. There were no between-group differences in mortality or in cognitive or functional outcomes at 3-month follow-up.

Interpretation:

In this older population with hip fracture, treatment with melatonin did not reduce the incidence of delirium. Trial registration: Netherlands Trial Registry, NTR1576: MAPLE (Melatonin Against PLacebo in Elderly patients) study; www.trialregister.nl/trialreg/admin/rctview.asp?TC=1576. Delirium in older inpatients is associated with a high risk of dementia and other complications that translate into increased mortality and health care costs.1,2 The antipsychotic haloperidol has historically been the agent of choice for treating delirium, and it has increasingly been administered as a prophylactic for delirium or to reduce symptoms such as hallucinations and aggressive behaviour.3,4 However, all antipsychotic treatments may induce serious cerebrovascular adverse effects and greater mortality, particularly among patients with dementia.5,6 These effects led the US Food and Drug Administration to issue a serious warning against their use.7 In addition, benzodiazepines are still frequently used to treat delirium, despite their being known to elicit or aggravate delirium.8,9 Disturbances of the circadian sleep–wake cycle represent one of the core features of delirium,10 leading to the hypothesis that the neurotransmitter melatonin and changes in its metabolism may be involved in the pathogenesis of delirium.11,12 Objective measurements have shown that melatonin metabolism is disturbed after abdominal and other types of surgery, insomnia, sleep deprivation and stays in the intensive care unit (ICU), all of which are also known to be factors that contribute to delirium.13–16 These characteristics suggest an association between melatonin abnormalities and delirium.17–22 Although proof of a causal relation is still lacking, inpatients might nevertheless benefit from melatonin supplementation therapy through postoperative maintenance or restoration of their sleep–wake cycle.23–25 Although melatonin depletion is thought to be one of the mechanisms of delirium, few studies have investigated the effects of altering perioperative plasma concentrations of melatonin, in particular, the possible effects on postoperative delirium. The primary objective of this study was to assess the effects of melatonin on the incidence of delirium among elderly patients admitted to hospital as an emergency following hip fracture. Secondary outcomes were duration and severity of delirium, length of hospital stay, total doses of haloperidol and benzodiazepines administered to patients with delirium, mortality during the hospital stay, and functional status, cognitive function and mortality at 3-month follow-up.  相似文献   
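The incidence comparison in the Results above (55/186 v. 49/192) is an absolute risk difference. A minimal Wald-interval sketch of that calculation (a normal approximation; the authors' exact interval method may differ):

```python
import math

def risk_difference_wald(events_a: int, n_a: int, events_b: int, n_b: int,
                         z: float = 1.96):
    """Risk difference between two groups with a Wald 95% confidence interval."""
    p_a, p_b = events_a / n_a, events_b / n_b
    diff = p_a - p_b
    se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    return diff, diff - z * se, diff + z * se

# Counts from the abstract: melatonin 55/186 v. placebo 49/192
diff, lo, hi = risk_difference_wald(55, 186, 49, 192)
print(f"{diff:.3f} ({lo:.3f} to {hi:.3f})")
```

The interval comfortably spans zero, consistent with the abstract's conclusion of no detectable effect on delirium incidence.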

15.

Background

Patients exposed to low-dose ionizing radiation from cardiac imaging and therapeutic procedures after acute myocardial infarction may be at increased risk of cancer.

Methods

Using an administrative database, we selected a cohort of patients who had an acute myocardial infarction between April 1996 and March 2006 and no history of cancer. We documented all cardiac imaging and therapeutic procedures involving low-dose ionizing radiation. The primary outcome was risk of cancer. Statistical analyses were performed using a time-dependent Cox model adjusted for age, sex and exposure to low-dose ionizing radiation from noncardiac imaging to account for work-up of cancer.

Results

Of the 82 861 patients included in the cohort, 77% underwent at least one cardiac imaging or therapeutic procedure involving low-dose ionizing radiation in the first year after acute myocardial infarction. The cumulative exposure to radiation from cardiac procedures was 5.3 millisieverts (mSv) per patient-year, of which 84% occurred during the first year after acute myocardial infarction. A total of 12 020 incident cancers were diagnosed during the follow-up period. There was a dose-dependent relation between exposure to radiation from cardiac procedures and subsequent risk of cancer. For every 10 mSv of low-dose ionizing radiation, there was a 3% increase in the age- and sex-adjusted risk of cancer over a mean follow-up period of five years (hazard ratio 1.003 per millisievert, 95% confidence interval 1.002–1.004).

Interpretation

Exposure to low-dose ionizing radiation from cardiac imaging and therapeutic procedures after acute myocardial infarction is associated with an increased risk of cancer. Studies involving atomic bomb survivors have documented an increased incidence of malignant neoplasm related to the radiation exposure.1–4 Survivors who were farther from the epicentre of the blast had a lower incidence of cancer, whereas those who were closer had a higher incidence.5 Similar risk estimates have been reported among workers in nuclear plants.6 However, little is known about the relation between exposure to low-dose ionizing radiation from medical procedures and the risk of cancer. In the six decades since the atomic bomb explosions, most individuals worldwide have had minimal exposure to ionizing radiation. However, the recent increase in the use of medical imaging and therapeutic procedures involving low-dose ionizing radiation has led to a growing concern that individual patients may be at increased risk of cancer.7–12 Whereas strict regulatory control is placed on occupational exposure at work sites, no such control exists among patients who are exposed to such radiation.13–16 It is not only the frequency of these procedures that is increasing: newer types of imaging procedures use higher doses of low-dose ionizing radiation than more traditional procedures.8,11 Among patients being evaluated for coronary artery disease, for example, coronary computed tomography is increasingly being used. This test may be used in addition to other tests such as nuclear scans, coronary angiography and percutaneous coronary intervention, each of which exposes the patient to low-dose ionizing radiation.12,17–21 Imaging procedures provide information that can be used to predict the prognosis of patients with coronary artery disease. 
Since such predictions do not necessarily translate into better clinical outcomes,8,12 the prognostic value obtained from imaging procedures using low-dose ionizing radiation needs to be balanced against the potential for risk. Authors of several studies have estimated that the risk of cancer is not negligible among patients exposed to low-dose ionizing radiation.22–27 To our knowledge, none of these studies directly linked cumulative exposure and cancer risk. We examined a cohort of patients who had acute myocardial infarction and measured the association between low-dose ionizing radiation from cardiac imaging and therapeutic procedures and the risk of cancer.  相似文献   
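The two effect sizes quoted above are consistent with each other: hazard ratios compound multiplicatively with dose, so a hazard ratio of 1.003 per millisievert scales to about 1.03 per 10 mSv, i.e., the reported 3% increase. A one-line sketch of that arithmetic:

```python
def scale_hazard_ratio(hr_per_unit: float, units: float) -> float:
    """Hazard ratios multiply on the log scale, so HR per k units = HR_per_unit ** k."""
    return hr_per_unit ** units

# HR 1.003 per mSv from the abstract, scaled to a 10-mSv increment
print(round(scale_hazard_ratio(1.003, 10), 3))  # → 1.03
```

The same rescaling applies to the confidence limits (1.002 and 1.004 per mSv), which is how per-unit model output is usually re-expressed at a clinically meaningful dose increment.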

16.

Background:

Recent warnings from Health Canada regarding codeine for children have led to increased use of nonsteroidal anti-inflammatory drugs and morphine for common injuries such as fractures. Our objective was to determine whether morphine administered orally has superior efficacy to ibuprofen in fracture-related pain.

Methods:

We used a parallel group, randomized, blinded superiority design. Children who presented to the emergency department with an uncomplicated extremity fracture were randomly assigned to receive either morphine (0.5 mg/kg orally) or ibuprofen (10 mg/kg) for 24 hours after discharge. Our primary outcome was the change in pain score using the Faces Pain Scale — Revised (FPS-R). Participants were asked to record pain scores immediately before and 30 minutes after receiving each dose.

Results:

We analyzed data from 66 participants in the morphine group and 68 participants in the ibuprofen group. For both morphine and ibuprofen, we found a reduction in pain scores (mean pre–post difference ± standard deviation for dose 1: morphine 1.5 ± 1.2, ibuprofen 1.3 ± 1.0, between-group difference [δ] 0.2 [95% confidence interval (CI) −0.2 to 0.6]; dose 2: morphine 1.3 ± 1.3, ibuprofen 1.3 ± 0.9, δ 0 [95% CI −0.4 to 0.4]; dose 3: morphine 1.3 ± 1.4, ibuprofen 1.4 ± 1.1, δ −0.1 [95% CI −0.7 to 0.4]; and dose 4: morphine 1.5 ± 1.4, ibuprofen 1.1 ± 1.2, δ 0.4 [95% CI −0.2 to 1.1]). We found no significant differences in the change in pain scores between morphine and ibuprofen between groups at any of the 4 time points (p = 0.6). Participants in the morphine group had significantly more adverse effects than those in the ibuprofen group (56.1% v. 30.9%, p < 0.01).

Interpretation:

We found no significant difference in analgesic efficacy between orally administered morphine and ibuprofen. However, morphine was associated with a significantly greater number of adverse effects. Our results suggest that ibuprofen remains safe and effective for outpatient pain management in children with uncomplicated fractures. Trial registration: ClinicalTrials.gov, no. NCT01690780. There is ample evidence that analgesia is underused,1 underprescribed,2 delayed in its administration2 and suboptimally dosed3 in clinical settings. Children are particularly susceptible to suboptimal pain management4 and are less likely to receive opioid analgesia.5 Untreated pain in childhood has been reported to lead to short-term problems such as slower healing6 and to long-term issues such as anxiety, needle phobia,7 hyperesthesia8 and fear of medical care.9 The American Academy of Pediatrics has reaffirmed its advocacy for the appropriate use of analgesia for children with acute pain.10 Fractures constitute between 10% and 25% of all injuries.11 The most severe pain after an injury occurs within the first 48 hours, with more than 80% of children showing compromise in at least 1 functional area.12 Low rates of analgesia have been reported after discharge from hospital.13 A recently improved understanding of the pharmacogenomics of codeine has raised significant concerns about its safety,14,15 and has led to a Food and Drug Administration boxed warning16 and a Health Canada advisory17 against its use. 
Although ibuprofen has been cited as the most common agent used by caregivers to treat musculoskeletal pain,12,13 there are concerns that its use as monotherapy may lead to inadequate pain management.6,18 Evidence suggests that orally administered morphine13 and other opioids are increasingly being prescribed.19 However, evidence for the oral administration of morphine in acute pain management is limited.20,21 Thus, additional studies are needed to address this gap in knowledge and provide a scientific basis for outpatient analgesic choices in children. Our objective was to assess if orally administered morphine is superior to ibuprofen in relieving pain in children with nonoperative fractures.  相似文献   
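The between-group differences (δ) reported in the Results above come from group means and standard deviations. A hedged sketch of the normal-approximation interval for such a difference (a large-sample Wald-type CI; the authors' exact method may differ):

```python
import math

def mean_diff_ci(mean_a: float, sd_a: float, n_a: int,
                 mean_b: float, sd_b: float, n_b: int,
                 z: float = 1.96):
    """Difference in means with a normal-approximation 95% confidence interval."""
    diff = mean_a - mean_b
    se = math.sqrt(sd_a ** 2 / n_a + sd_b ** 2 / n_b)
    return diff, diff - z * se, diff + z * se

# Dose 1 from the abstract: morphine 1.5 ± 1.2 (n = 66) v. ibuprofen 1.3 ± 1.0 (n = 68)
diff, lo, hi = mean_diff_ci(1.5, 1.2, 66, 1.3, 1.0, 68)
print(f"delta = {diff:.1f} ({lo:.1f} to {hi:.1f})")
```

With these inputs the interval reproduces the abstract's dose-1 figure of 0.2 (−0.2 to 0.6) to one decimal place, an interval spanning zero, hence no detectable efficacy difference.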

17.

Background

There is controversy about which children with minor head injury need to undergo computed tomography (CT). We aimed to develop a highly sensitive clinical decision rule for the use of CT in children with minor head injury.

Methods

For this multicentre cohort study, we enrolled consecutive children with blunt head trauma presenting with a score of 13–15 on the Glasgow Coma Scale and loss of consciousness, amnesia, disorientation, persistent vomiting or irritability. For each child, staff in the emergency department completed a standardized assessment form before any CT. The main outcomes were need for neurologic intervention and presence of brain injury as determined by CT. We developed a decision rule by using recursive partitioning to combine variables that were both reliable and strongly associated with the outcome measures and thus to find the best combinations of predictor variables that were highly sensitive for detecting the outcome measures with maximal specificity.

Results

Among the 3866 patients enrolled (mean age 9.2 years), 95 (2.5%) had a score of 13 on the Glasgow Coma Scale, 282 (7.3%) had a score of 14, and 3489 (90.2%) had a score of 15. CT revealed that 159 (4.1%) had a brain injury, and 24 (0.6%) underwent neurologic intervention. We derived a decision rule for CT of the head consisting of four high-risk factors (failure to reach a score of 15 on the Glasgow Coma Scale within two hours, suspicion of open skull fracture, worsening headache and irritability) and three additional medium-risk factors (large, boggy hematoma of the scalp; signs of basal skull fracture; dangerous mechanism of injury). The high-risk factors were 100.0% sensitive (95% CI 86.2%–100.0%) for predicting the need for neurologic intervention and would require that 30.2% of patients undergo CT. The medium-risk factors resulted in 98.1% sensitivity (95% CI 94.6%–99.4%) for the prediction of brain injury by CT and would require that 52.0% of patients undergo CT.

Interpretation

The decision rule developed in this study identifies children at two levels of risk. Once the decision rule has been prospectively validated, it has the potential to standardize and improve the use of CT for children with minor head injury.

Each year more than 650 000 children are seen in hospital emergency departments in North America with “minor head injury,” i.e., history of loss of consciousness, amnesia or disorientation in a patient who is conscious and responsive in the emergency department (Glasgow Coma Scale score1 13–15). Although most patients with minor head injury can be discharged after a period of observation, a small proportion experience deterioration of their condition and need to undergo neurosurgical intervention for intracranial hematoma.2–4 The use of computed tomography (CT) in the emergency department is important in the early diagnosis of these intracranial hematomas.

Over the past decade the use of CT for minor head injury has become increasingly common, while its diagnostic yield has remained low. In Canadian pediatric emergency departments the use of CT for minor head injury increased from 15% in 1995 to 53% in 2005.5,6 Despite this increase, a small but important number of pediatric intracranial hematomas are missed in Canadian emergency departments at the first visit.3 Few children with minor head injury have a visible brain injury on CT (4%–7%), and only 0.5% have an intracranial lesion requiring urgent neurosurgical intervention.5,7 The increased use of CT adds substantially to health care costs and exposes a large number of children each year to the potentially harmful effects of ionizing radiation.8,9 Currently, there are no widely accepted, evidence-based guidelines on the use of CT for children with minor head injury.

A clinical decision rule incorporates three or more variables from the history, physical examination or simple tests10,11 into a tool that helps clinicians to make diagnostic or therapeutic decisions at the bedside.
Members of our group have developed decision rules to allow physicians to be more selective in the use of radiography for children with injuries of the ankle12 and knee,13 as well as for adults with injuries of the ankle,14–17 knee,18–20 head21,22 and cervical spine.23,24 The aim of this study was to prospectively derive an accurate and reliable clinical decision rule for the use of CT for children with minor head injury.
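The interval estimates quoted in the Results are consistent with the Wilson score interval for a binomial proportion. As a check, a minimal sketch (assuming 156 of the 159 brain injuries were detected, which matches the reported 98.1% sensitivity) reproduces the published 94.6%–99.4% interval:

```python
from math import sqrt

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple:
    """Wilson score interval for a binomial proportion (default 95%)."""
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

# 156/159 injuries detected is an assumption derived from the reported
# 98.1% sensitivity; the abstract gives only the percentage.
lo, hi = wilson_ci(156, 159)
```

Here `lo` and `hi` come out to roughly 0.946 and 0.994, matching the 95% CI of 94.6%–99.4% stated for the medium-risk factors.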

18.

Background

Cryotherapy is widely used for the treatment of cutaneous warts in primary care. However, evidence favours salicylic acid application. We compared the effectiveness of these treatments as well as a wait-and-see approach.

Methods

Consecutive patients with new cutaneous warts were recruited in 30 primary care practices in the Netherlands between May 1, 2006, and Jan. 26, 2007. We randomly allocated eligible patients to one of three groups: cryotherapy with liquid nitrogen every two weeks, self-application of salicylic acid daily or a wait-and-see approach. The primary outcome was the proportion of participants whose warts were all cured at 13 weeks. Analysis was on an intention-to-treat basis. Secondary outcomes included treatment adherence, side effects and treatment satisfaction. Research nurses assessed outcomes during home visits at 4, 13 and 26 weeks.

Results

Of the 250 participants (age 4 to 79 years), 240 were included in the analysis at 13 weeks (loss to follow-up 4%). Cure rates were 39% (95% confidence interval [CI] 29%–51%) in the cryotherapy group, 24% (95% CI 16%–35%) in the salicylic acid group and 16% (95% CI 9.5%–25%) in the wait-and-see group. Differences in effectiveness were most pronounced among participants with common warts (n = 116): cure rates were 49% (95% CI 34%–64%) in the cryotherapy group, 15% (95% CI 7%–30%) in the salicylic acid group and 8% (95% CI 3%–21%) in the wait-and-see group. Cure rates among the participants with plantar warts (n = 124) did not differ significantly between treatment groups.

Interpretation

For common warts, cryotherapy was the most effective therapy in primary care. For plantar warts, we found no clinically relevant difference in effectiveness between cryotherapy, topical application of salicylic acid or a wait-and-see approach after 13 weeks. (ClinicalTrials.gov registration no. ISRCTN42730629)

Cutaneous warts are common.1–3 Up to one-third of primary school children have warts, of which two-thirds resolve within two years.4,5 Because warts frequently result in discomfort,6 2% of the general population and 6% of school-aged children each year present with warts to their family physician.7,8 The usual treatment is cryotherapy with liquid nitrogen or, less frequently, topical application of salicylic acid.9–12 Some physicians choose a wait-and-see approach because of the benign natural course of warts and the risk of side effects of treatment.10,11

A recent Cochrane review on treatments of cutaneous warts concluded that available studies were small, poorly designed or limited to dermatology outpatients.10,11 Evidence on cryotherapy was contradictory,13–18 whereas the evidence on salicylic acid was more convincing.19–23 However, studies that compared cryotherapy and salicylic acid directly showed no differences in effectiveness.24,25 The Cochrane review called for high-quality trials in primary care to compare the effects of cryotherapy, salicylic acid and placebo.

We conducted a three-arm randomized controlled trial to compare the effectiveness of cryotherapy with liquid nitrogen, topical application of salicylic acid and a wait-and-see approach for the treatment of common and plantar warts in primary care.

20.

Background:

Some children feel pain during wound closures using tissue adhesives. We sought to determine whether a topically applied analgesic solution of lidocaine–epinephrine–tetracaine would decrease pain during tissue adhesive repair.

Methods:

We conducted a randomized, placebo-controlled, blinded trial involving 221 children between the ages of 3 months and 17 years. Patients were enrolled between March 2011 and January 2012 when presenting to a tertiary-care pediatric emergency department with lacerations requiring closure with tissue adhesive. Patients received either lidocaine–epinephrine–tetracaine or placebo before undergoing wound closure. Our primary outcome was the pain rating of adhesive application according to the colour Visual Analogue Scale and the Faces Pain Scale – Revised. Our secondary outcomes were physician ratings of difficulty of wound closure and wound hemostasis, as well as physicians' guesses as to which treatment the patient had received.

Results:

Children who received the analgesic before wound closure reported less pain (median 0.5, interquartile range [IQR] 0.25–1.50) than those who received placebo (median 1.00, IQR 0.38–2.50) as rated using the colour Visual Analogue Scale (p = 0.01) and Faces Pain Scale – Revised (median 0.00, IQR 0.00–2.00, for analgesic v. median 2.00, IQR 0.00–4.00, for placebo, p < 0.01). Patients who received the analgesic were significantly more likely to report having or to appear to have a pain-free procedure (relative risk [RR] of pain 0.54, 95% confidence interval [CI] 0.37–0.80). Complete hemostasis of the wound was also more common among patients who received lidocaine–epinephrine–tetracaine than among those who received placebo (78.2% v. 59.3%, p = 0.008).
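The abstract reports a relative risk of pain of 0.54 (95% CI 0.37–0.80) but not the underlying counts, so the published interval cannot be reconstructed exactly here. As a sketch of how such a figure is typically computed, the following uses the standard log-normal approximation for the RR confidence interval, with purely illustrative counts (not the trial's data):

```python
from math import exp, log, sqrt

def relative_risk(a: int, n1: int, b: int, n2: int, z: float = 1.96) -> tuple:
    """Relative risk of an event in group 1 vs. group 2, with a 95% CI
    from the usual log-normal approximation."""
    rr = (a / n1) / (b / n2)
    se = sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)  # SE of log(RR)
    return rr, exp(log(rr) - z * se), exp(log(rr) + z * se)

# Illustrative counts only: 30/100 events in the treatment group
# vs. 60/100 in the placebo group.
rr, rr_lo, rr_hi = relative_risk(30, 100, 60, 100)
```

With these made-up counts the point estimate is 0.5; the real trial's RR of 0.54 would require the actual event counts, which the abstract does not give.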

Interpretation:

Treating minor lacerations with lidocaine–epinephrine–tetracaine before wound closure with tissue adhesive reduced ratings of pain and increased the proportion of pain-free repairs among children aged 3 months to 17 years. This low-risk intervention may benefit children with lacerations requiring tissue adhesives instead of sutures. Trial registration: ClinicalTrials.gov, no. PR 6138378804.

Minor laceration repair with tissue adhesive, or “skin glue,” is common in pediatrics. Although less painful than cutaneous sutures,1 tissue adhesives polymerize through an exothermic reaction that may cause a burning, painful sensation. Pain is dependent on the specific formulation of the adhesive used and the method of application. One study of different tissue adhesives reported 23.8%–40.5% of participants feeling a “burning sensation,”2 whereas another study reported “pain” in 17.6%–44.1% of children.3 The amounts of adhesive applied, method of application and individual patient characteristics can also influence the feeling of pain.3,4 Because tissue adhesives polymerize on contact with moisture,4,5 poor wound hemostasis has the potential to cause premature setting of the adhesive, leading to less efficient and more painful repairs.6

Preventing procedural pain is a high priority in pediatric care.7 Inadequate analgesia for pediatric procedures may result in more complicated procedures, increased pain sensitivity with future procedures8 and increased fear and anxiety of medical experiences persisting into adulthood.9 A practical method to prevent pain during laceration repairs with tissue adhesive would have a substantial benefit for children.

A topically applied analgesic solution containing lidocaine–epinephrine–tetracaine with vasoconstrictive properties provides safe and effective pain control during wound repair using sutures.10 A survey of pediatric emergency fellowship directors in the United States reported that 76% of respondents use this solution or a similar solution when suturing 3-cm chin lacerations in toddlers.11 However, in a hospital chart review, this solution was used in less than half of tissue adhesive repairs, the remainder receiving either local injection of anesthetic or no pain control.12 Reluctance to use lidocaine–epinephrine–tetracaine with tissue adhesive may be due to the perception that it is not worth the minimum 20-minute wait required for the analgesic to take effect13 or to a lack of awareness that tissue adhesives can cause pain.

We sought to investigate whether preapplying lidocaine–epinephrine–tetracaine would decrease pain in children during minor laceration repair using tissue adhesive.
