Similar articles
1.

Background:

There is uncertainty about the optimal approach to screen for clinically important cervical spine (C-spine) injury following blunt trauma. We conducted a systematic review to investigate the diagnostic accuracy of the Canadian C-spine rule and the National Emergency X-Radiography Utilization Study (NEXUS) criteria, 2 rules that are available to assist emergency physicians to assess the need for cervical spine imaging.

Methods:

We identified studies by an electronic search of CINAHL, Embase and MEDLINE. We included articles that reported on a cohort of patients who experienced blunt trauma and for whom clinically important cervical spine injury detectable by diagnostic imaging was the differential diagnosis; evaluated the diagnostic accuracy of the Canadian C-spine rule or NEXUS or both; and used an adequate reference standard. We assessed the methodologic quality using the Quality Assessment of Diagnostic Accuracy Studies criteria. We used the extracted data to calculate sensitivity, specificity, likelihood ratios and post-test probabilities.
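The accuracy measures named here all follow from a 2 × 2 table of rule result against the reference standard. A minimal sketch of those calculations, using illustrative counts rather than data from any included study:

```python
# Illustrative 2x2 table: rule result vs. reference-standard injury status.
# Counts are hypothetical, not taken from any included study.
tp, fp, fn, tn = 38, 1200, 2, 760

sensitivity = tp / (tp + fn)              # proportion of injuries the rule flags
specificity = tn / (tn + fp)              # proportion of non-injuries the rule clears
lr_pos = sensitivity / (1 - specificity)  # positive likelihood ratio
lr_neg = (1 - sensitivity) / specificity  # negative likelihood ratio

# Post-test probability after a negative rule result, assuming a 3% pretest
# probability (roughly the prevalence quoted for blunt trauma).
pretest = 0.03
pretest_odds = pretest / (1 - pretest)
posttest_odds = pretest_odds * lr_neg
posttest_prob = posttest_odds / (1 + posttest_odds)

print(round(sensitivity, 2), round(specificity, 2),
      round(lr_neg, 2), round(posttest_prob, 4))
```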

Results:

We included 15 studies of modest methodologic quality. For the Canadian C-spine rule, sensitivity ranged from 0.90 to 1.00 and specificity ranged from 0.01 to 0.77. For NEXUS, sensitivity ranged from 0.83 to 1.00 and specificity ranged from 0.02 to 0.46. One study directly compared the accuracy of these 2 rules using the same cohort and found that the Canadian C-spine rule had better accuracy. For both rules, a negative test was more informative for reducing the probability of a clinically important cervical spine injury.

Interpretation:

Based on studies with modest methodologic quality and only one direct comparison, we found that the Canadian C-spine rule appears to have better diagnostic accuracy than the NEXUS criteria. Future studies need to follow rigorous methodologic procedures to ensure that the findings are as free of bias as possible.A clinically important cervical spine injury is defined as any fracture, dislocation or ligamentous instability detectable by diagnostic imaging and requiring surgical or specialist follow-up.1,2 These injuries can have disastrous consequences including spinal cord injury and death if the diagnosis is delayed or missed.3 Despite the low prevalence (< 3%) of clinically important cervical spinal injury following blunt trauma (e.g., motor vehicle collision), accurate diagnosis is imperative for safe, effective management.4 Currently, uncertainty exists about the optimal diagnostic approach. Some guidelines5,6 advocate using screening tools to identify patients with a higher likelihood of clinically important cervical spinal injury; these patients are then sent for imaging to establish the diagnosis. In other more conservative settings, all patients with blunt trauma are sent for imaging. The first approach, involving screening, is arguably preferable because it optimizes resources and time, while reducing unnecessary costs, radiation exposure and psychological stress for the patient.7 For screening to be safe and effective, the screening tools must have high sensitivity, a low negative likelihood ratio and a low rate of false positives. This assures clinicians that a clinically important cervical spine injury is unlikely and reduces the number of referrals for imaging.Clinical decision rules synthesize 3 or more findings from the patient’s history, physical examination or simple diagnostic tests to guide diagnostic and treatment decisions.8,9 Two clinical decision rules, the Canadian C-spine rule2 and the National Emergency X-Radiography Utilization Study (NEXUS; Box 1),10 are available to assess the need for imaging in patients with cervical spine injury following blunt trauma. These rules aim to reduce unnecessary imaging by reserving these investigations for patients with a higher likelihood of clinically important cervical spinal injury. Developed independently and validated using large cohorts of patients, these 2 decision rules are recommended in many international guidelines.5,11,12 However, no consensus exists as to which rule should be endorsed.1214 Therefore, the purpose of our systematic review was to describe the quality of research evaluating the Canadian C-spine rule and NEXUS; describe the diagnostic accuracy of the Canadian C-spine rule and NEXUS; and compare the diagnostic accuracy of the Canadian C-spine rule to that of NEXUS.

Box 1:

National Emergency X-Radiography Utilization Study (NEXUS) low-risk criteria10

Cervical spine radiography is indicated for patients with neck trauma unless they meet ALL of the following criteria:
  • No posterior midline cervical-spine tenderness
  • No evidence of intoxication
  • A normal level of alertness (score of 15 on the Glasgow Coma Scale)
  • No focal neurologic deficit
  • No painful distracting injuries
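The low-risk rule amounts to a conjunction of the five criteria above: imaging is indicated unless every criterion is met. A minimal sketch of that check (function and argument names are illustrative, not a published implementation):

```python
# Sketch of the NEXUS low-risk check in Box 1; argument names are illustrative.
def nexus_imaging_indicated(midline_tenderness: bool,
                            intoxication: bool,
                            gcs_15: bool,
                            focal_neuro_deficit: bool,
                            distracting_injury: bool) -> bool:
    """Return True if cervical spine radiography is indicated.

    A patient is low risk (no imaging) only if ALL five criteria are met.
    """
    low_risk = (not midline_tenderness and not intoxication and gcs_15
                and not focal_neuro_deficit and not distracting_injury)
    return not low_risk

# An alert patient with posterior midline tenderness -> imaging indicated.
print(nexus_imaging_indicated(True, False, True, False, False))  # True
```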

2.
Background:Otitis media with effusion is a common problem that lacks an evidence-based nonsurgical treatment option. We assessed the clinical effectiveness of treatment with a nasal balloon device in a primary care setting.Methods:We conducted an open, pragmatic randomized controlled trial set in 43 family practices in the United Kingdom. Children aged 4–11 years with a recent history of ear symptoms and otitis media with effusion in 1 or both ears, confirmed by tympanometry, were allocated to receive either autoinflation 3 times daily for 1–3 months plus usual care or usual care alone. Clearance of middle-ear fluid at 1 and 3 months was assessed by experts masked to allocation.Results:Of 320 children enrolled, those receiving autoinflation were more likely than controls to have normal tympanograms at 1 month (47.3% [62/131] v. 35.6% [47/132]; adjusted relative risk [RR] 1.36, 95% confidence interval [CI] 0.99 to 1.88) and at 3 months (49.6% [62/125] v. 38.3% [46/120]; adjusted RR 1.37, 95% CI 1.03 to 1.83; number needed to treat = 9). Autoinflation produced greater improvements in ear-related quality of life (adjusted between-group difference in change from baseline in OMQ-14 [an ear-related measure of quality of life] score −0.42, 95% CI −0.63 to −0.22). Compliance was 89% at 1 month and 80% at 3 months. Adverse events were mild, infrequent and comparable between groups.Interpretation:Autoinflation in children aged 4–11 years with otitis media with effusion is feasible in primary care and effective both in clearing effusions and improving symptoms and ear-related child and parent quality of life. Trial registration: ISRCTN, No. 55208702.Otitis media with effusion, also known as glue ear, is an accumulation of fluid in the middle ear, without symptoms or signs of an acute ear infection. 
It is often associated with viral infection.13 The prevalence rises to 46% in children aged 4–5 years,4 when hearing difficulty, other ear-related symptoms and broader developmental concerns often bring the condition to medical attention.3,5,6 Middle-ear fluid is associated with conductive hearing losses of about 15–45 dB HL.7 Resolution is clinically unpredictable,810 with about a third of cases showing recurrence.11 In the United Kingdom, about 200 000 children with the condition are seen annually in primary care.12,13 Research suggests some children seen in primary care are as badly affected as those seen in hospital.7,9,14,15 In the United States, there were 2.2 million diagnosed episodes in 2004, costing an estimated $4.0 billion.16 Rates of ventilation tube surgery show variability between countries,1719 with a declining trend in the UK.20Initial clinical management consists of reasonable temporizing or delay before considering surgery.13 Unfortunately, all available medical treatments for otitis media with effusion such as antibiotics, antihistamines, decongestants and intranasal steroids are ineffective and have unwanted effects, and therefore cannot be recommended.2123 Not only are antibiotics ineffective, but resistance to them poses a major threat to public health.24,25 Although surgery is effective for a carefully selected minority,13,26,27 a simple low-cost, nonsurgical treatment option could benefit a much larger group of symptomatic children, with the purpose of addressing legitimate clinical concerns without incurring excessive delays.Autoinflation using a nasal balloon device is a low-cost intervention with the potential to be used more widely in primary care, but current evidence of its effectiveness is limited to several small hospital-based trials28 that found a higher rate of tympanometric resolution of ear fluid at 1 month.2931 Evidence of feasibility and effectiveness of autoinflation to inform wider clinical use is lacking.13,28 Thus we report here the findings of a large pragmatic trial of the clinical effectiveness of nasal balloon autoinflation in a spectrum of children with clinically confirmed otitis media with effusion identified from primary care.  相似文献   
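The 3-month counts above (62/125 v. 46/120) are enough to reproduce the crude effect measures; note that the published relative risk of 1.37 is covariate-adjusted, so the crude value differs slightly. A sketch:

```python
import math

# 3-month tympanometric resolution: autoinflation 62/125 vs. usual care 46/120.
events_tx, n_tx = 62, 125
events_ctl, n_ctl = 46, 120

risk_tx = events_tx / n_tx            # 0.496
risk_ctl = events_ctl / n_ctl         # 0.383
crude_rr = risk_tx / risk_ctl         # ~1.29 (published adjusted RR: 1.37)
arr = risk_tx - risk_ctl              # absolute risk difference, ~0.113
nnt = math.ceil(1 / arr)              # number needed to treat = 9

print(round(crude_rr, 2), round(arr, 3), nnt)
```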

3.

Background:

Little evidence exists on the effect of an energy-unrestricted healthy diet on metabolic syndrome. We evaluated the long-term effect of Mediterranean diets ad libitum on the incidence or reversion of metabolic syndrome.

Methods:

We performed a secondary analysis of the PREDIMED trial — a multicentre, randomized trial done between October 2003 and December 2010 that involved men and women (age 55–80 yr) at high risk for cardiovascular disease. Participants were randomly assigned to 1 of 3 dietary interventions: a Mediterranean diet supplemented with extra-virgin olive oil, a Mediterranean diet supplemented with nuts or advice on following a low-fat diet (the control group). The interventions did not include increased physical activity or weight loss as a goal. We analyzed available data from 5801 participants. We determined the effect of diet on incidence and reversion of metabolic syndrome using Cox regression analysis to calculate hazard ratios (HRs) and 95% confidence intervals (CIs).

Results:

Over 4.8 years of follow-up, metabolic syndrome developed in 960 (50.0%) of the 1919 participants who did not have the condition at baseline. The risk of developing metabolic syndrome did not differ between participants assigned to the control diet and those assigned to either of the Mediterranean diets (control v. olive oil HR 1.10, 95% CI 0.94–1.30, p = 0.231; control v. nuts HR 1.08, 95% CI 0.92–1.27, p = 0.3). Reversion occurred in 958 (28.2%) of the 3392 participants who had metabolic syndrome at baseline. Compared with the control group, participants on either Mediterranean diet were more likely to undergo reversion (control v. olive oil HR 1.35, 95% CI 1.15–1.58, p < 0.001; control v. nuts HR 1.28, 95% CI 1.08–1.51, p < 0.001). Participants in the group receiving olive oil supplementation showed significant decreases in both central obesity and high fasting glucose (p = 0.02); participants in the group supplemented with nuts showed a significant decrease in central obesity.

Interpretation:

A Mediterranean diet supplemented with either extra virgin olive oil or nuts is not associated with the onset of metabolic syndrome, but such diets are more likely to cause reversion of the condition. An energy-unrestricted Mediterranean diet may be useful in reducing the risks of central obesity and hyperglycemia in people at high risk of cardiovascular disease. Trial registration: ClinicalTrials.gov, no. ISRCTN35739639.Metabolic syndrome is a cluster of 3 or more related cardiometabolic risk factors: central obesity (determined by waist circumference), hypertension, hypertriglyceridemia, low plasma high-density lipoprotein (HDL) cholesterol levels and hyperglycemia. Having the syndrome increases a person’s risk for type 2 diabetes and cardiovascular disease.1,2 In addition, the condition is associated with increased morbidity and all-cause mortality.1,35 The worldwide prevalence of metabolic syndrome in adults approaches 25%68 and increases with age,7 especially among women,8,9 making it an important public health issue.Several studies have shown that lifestyle modifications,10 such as increased physical activity,11 adherence to a healthy diet12,13 or weight loss,1416 are associated with reversion of the metabolic syndrome and its components. However, little information exists as to whether changes in the overall dietary pattern without weight loss might also be effective in preventing and managing the condition.The Mediterranean diet is recognized as one of the healthiest dietary patterns. It has shown benefits in patients with cardiovascular disease17,18 and in the prevention and treatment of related conditions, such as diabetes,1921 hypertension22,23 and metabolic syndrome.24Several cross-sectional2529 and prospective3032 epidemiologic studies have suggested an inverse association between adherence to the Mediterranean diet and the prevalence or incidence of metabolic syndrome. Evidence from clinical trials has shown that an energy-restricted Mediterranean diet33 or adopting a Mediterranean diet after weight loss34 has a beneficial effect on metabolic syndrome. However, these studies did not determine whether the effect could be attributed to the weight loss or to the diets themselves.Seminal data from the PREDIMED (PREvención con DIeta MEDiterránea) study suggested that adherence to a Mediterranean diet supplemented with nuts reversed metabolic syndrome more so than advice to follow a low-fat diet.35 However, the report was based on data from only 1224 participants followed for 1 year. We have analyzed the data from the final PREDIMED cohort after a median follow-up of 4.8 years to determine the long-term effects of a Mediterranean diet on metabolic syndrome.  相似文献   
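The "cluster of 3 or more components" definition described above can be expressed as a simple count. A sketch follows; the cut-offs are the commonly used harmonized thresholds and are assumptions here, since the abstract does not list them:

```python
# Sketch of the "3 or more of 5 components" definition. Cut-offs below are the
# commonly used harmonized thresholds and are assumptions; the abstract does
# not specify them.
def has_metabolic_syndrome(waist_cm, sbp, dbp, tg_mmol_l,
                           hdl_mmol_l, glucose_mmol_l, male):
    components = [
        waist_cm >= (102 if male else 88),      # central obesity
        sbp >= 130 or dbp >= 85,                # elevated blood pressure
        tg_mmol_l >= 1.7,                       # hypertriglyceridemia
        hdl_mmol_l < (1.03 if male else 1.29),  # low HDL cholesterol
        glucose_mmol_l >= 5.6,                  # hyperglycemia
    ]
    return sum(components) >= 3

# Example: 4 of 5 components present -> True
print(has_metabolic_syndrome(105, 138, 82, 2.1, 0.9, 5.2, male=True))
```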

4.

Background

The Canadian CT Head Rule was developed to allow physicians to be more selective when ordering computed tomography (CT) imaging for patients with minor head injury. We sought to evaluate the effectiveness of implementing this validated decision rule at multiple emergency departments.

Methods

We conducted a matched-pair cluster-randomized trial that compared the outcomes of 4531 patients with minor head injury during two 12-month periods (before and after) at hospital emergency departments in Canada, six of which were randomly allocated as intervention sites and six as control sites. At the intervention sites, active strategies, including education, changes to policy and real-time reminders on radiologic requisitions were used to implement the Canadian CT Head Rule. The main outcome measure was referral for CT scan of the head.

Results

Baseline characteristics of patients were similar when comparing control to intervention sites. At the intervention sites, the proportion of patients referred for CT imaging increased from the “before” period (62.8%) to the “after” period (76.2%) (difference +13.3%, 95% CI 9.7%–17.0%). At the control sites, the proportion of CT imaging usage also increased, from 67.5% to 74.1% (difference +6.7%, 95% CI 2.6%–10.8%). The change in mean imaging rates from the “before” period to the “after” period for intervention versus control hospitals was not significant (p = 0.16). There were no missed brain injuries or adverse outcomes.

Interpretation

Our knowledge–translation-based trial of the Canadian CT Head Rule did not reduce rates of CT imaging in Canadian emergency departments. Future studies should identify strategies to deal with barriers to implementation of this decision rule and explore more effective approaches to knowledge translation. (ClinicalTrials.gov trial register no. NCT00993252)More than six million instances of head and neck trauma are seen annually in emergency departments in Canada and the United States.1 Most are classified as minimal or minor head injury, but in a very small proportion, deterioration occurs and neurosurgical intervention is needed for intracranial hematoma.2,3 In recent years, North American use of computed tomography (CT) for many conditions in the emergency department, including minor head injury, has increased five-fold.1,4 Our own Canadian data showed marked variation in the use of CT for similar patients.5 Over 90% of CT scans are negative for clinically important brain injury.68 Owing to its high volume of usage, such imaging adds to health care costs. There have also been increasing concerns about radiation-related risk from unnecessary CT scans.9,10 Additionally, unnecessary use of CT scanning compounds the Canadian problems of overcrowding of emergency departments and inadequate access to advanced imaging for nonemergency outpatients.Clinical decision rules are derived from original research and may be defined as tools for clinical decision-making that incorporate three or more variables from a patient’s history, physical examination or simple tests.1113 The Canadian CT Head Rule comprises five high-risk and two medium-risk criteria and was derived by prospectively evaluating 3121 adults with minor head injury (Figure 1) (Appendix 1, available at www.cmaj.ca/cgi/content/full/cmaj.091974/DC1).6 The resultant decision rule was then prospectively validated in a group of 2707 patients and showed high sensitivity (100%; 95% confidence interval [CI ] 91–100) and reliability.14 The results of its validation suggested that, in patients presenting to emergency departments with minor head trauma, a rate of usage of CT imaging as low as 62.4% was possible and safe.Open in a separate windowFigure 1The Canadian CT Head Rule, as used in the study. Note: CSF = cerebrospinal fluid, CT = computed tomography, GCS = Glasgow Coma Scale.Unfortunately, most decision rules are never used after derivation because they are not adequately tested in validation or implementation studies.1519 We recently successfully implemented a similar rule, the Canadian C-Spine Rule, at multiple Canadian sites.20 Hence, the goal of the current study was to evaluate the effectiveness and safety of an active strategy to implement the Canadian CT Head Rule at multiple emergency departments. We wanted to test both the impact of the rule on rates of CT imaging and the effectiveness of an inexpensive and easily adopted implementation strategy. In addition, we wanted to further evaluate the accuracy of the rule.  相似文献   

5.
6.

Background:

Recent warnings from Health Canada regarding codeine for children have led to increased use of nonsteroidal anti-inflammatory drugs and morphine for common injuries such as fractures. Our objective was to determine whether morphine administered orally has superior efficacy to ibuprofen in fracture-related pain.

Methods:

We used a parallel group, randomized, blinded superiority design. Children who presented to the emergency department with an uncomplicated extremity fracture were randomly assigned to receive either morphine (0.5 mg/kg orally) or ibuprofen (10 mg/kg) for 24 hours after discharge. Our primary outcome was the change in pain score using the Faces Pain Scale — Revised (FPS-R). Participants were asked to record pain scores immediately before and 30 minutes after receiving each dose.

Results:

We analyzed data from 66 participants in the morphine group and 68 participants in the ibuprofen group. For both morphine and ibuprofen, we found a reduction in pain scores (mean pre–post difference ± standard deviation for dose 1: morphine 1.5 ± 1.2, ibuprofen 1.3 ± 1.0, between-group difference [δ] 0.2 [95% confidence interval (CI) −0.2 to 0.6]; dose 2: morphine 1.3 ± 1.3, ibuprofen 1.3 ± 0.9, δ 0 [95% CI −0.4 to 0.4]; dose 3: morphine 1.3 ± 1.4, ibuprofen 1.4 ± 1.1, δ −0.1 [95% CI −0.7 to 0.4]; and dose 4: morphine 1.5 ± 1.4, ibuprofen 1.1 ± 1.2, δ 0.4 [95% CI −0.2 to 1.1]). We found no significant difference in the change in pain scores between the morphine and ibuprofen groups at any of the 4 time points (p = 0.6). Participants in the morphine group had significantly more adverse effects than those in the ibuprofen group (56.1% v. 30.9%, p < 0.01).
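The dose-1 comparison above can be approximated from the reported means, standard deviations and group sizes with a normal-approximation confidence interval for a difference in means (a simplification of the trial's actual analysis):

```python
import math

# Dose 1: mean change in pain score, SD and group size for each arm.
mean_m, sd_m, n_m = 1.5, 1.2, 66   # morphine
mean_i, sd_i, n_i = 1.3, 1.0, 68   # ibuprofen

delta = mean_m - mean_i                          # between-group difference, 0.2
se = math.sqrt(sd_m**2 / n_m + sd_i**2 / n_i)    # standard error of the difference
ci = (delta - 1.96 * se, delta + 1.96 * se)      # approx. 95% CI, about (-0.2, 0.6)

print(round(delta, 1), tuple(round(x, 2) for x in ci))
```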

Interpretation:

We found no significant difference in analgesic efficacy between orally administered morphine and ibuprofen. However, morphine was associated with a significantly greater number of adverse effects. Our results suggest that ibuprofen remains safe and effective for outpatient pain management in children with uncomplicated fractures. Trial registration: ClinicalTrials.gov, no. NCT01690780.There is ample evidence that analgesia is underused,1 underprescribed,2 delayed in its administration2 and suboptimally dosed 3 in clinical settings. Children are particularly susceptible to suboptimal pain management4 and are less likely to receive opioid analgesia.5 Untreated pain in childhood has been reported to lead to short-term problems such as slower healing6 and to long-term issues such as anxiety, needle phobia,7 hyperesthesia8 and fear of medical care.9 The American Academy of Pediatrics has reaffirmed its advocacy for the appropriate use of analgesia for children with acute pain.10Fractures constitute between 10% and 25% of all injuries.11 The most severe pain after an injury occurs within the first 48 hours, with more than 80% of children showing compromise in at least 1 functional area.12 Low rates of analgesia have been reported after discharge from hospital.13 A recently improved understanding of the pharmacogenomics of codeine has raised significant concerns about its safety,14,15 and has led to a Food and Drug Administration boxed warning16 and a Health Canada advisory17 against its use. Although ibuprofen has been cited as the most common agent used by caregivers to treat musculoskeletal pain,12,13 there are concerns that its use as monotherapy may lead to inadequate pain management.6,18 Evidence suggests that orally administered morphine13 and other opioids are increasingly being prescribed.19 However, evidence for the oral administration of morphine in acute pain management is limited.20,21 Thus, additional studies are needed to address this gap in knowledge and provide a scientific basis for outpatient analgesic choices in children. Our objective was to assess if orally administered morphine is superior to ibuprofen in relieving pain in children with nonoperative fractures.  相似文献   

7.

Background

Preventive guidelines on cardiovascular risk management recommend lifestyle changes. Support for lifestyle changes may be a useful task for practice nurses, but the effect of such interventions in primary prevention is not clear. We examined the effect of involving patients in nurse-led cardiovascular risk management on lifestyle adherence and cardiovascular risk.

Methods

We performed a cluster randomized controlled trial in 25 practices that included 615 patients. The intervention consisted of nurse-led cardiovascular risk management, including risk assessment, risk communication, a decision aid and adapted motivational interviewing. The control group received a minimal nurse-led intervention. The self-reported outcome measures at one year were smoking, alcohol use, diet and physical activity. Nurses assessed 10-year cardiovascular mortality risk after one year.

Results

There were no significant differences between the intervention and control groups. The effect of the intervention on the consumption of vegetables and physical activity was small, and some differences were significant only in subgroups. The effects of the intervention on the intake of fat, fruit and alcohol and on smoking were not significant. We found no difference between the groups in 10-year cardiovascular risk.

Interpretation

Nurse-led risk communication, use of a decision aid and adapted motivational interviewing did not lead to relevant differences between the groups in terms of lifestyle changes or cardiovascular risk, despite significant within-group differences. It is not clear if programs for lifestyle change are effective in the primary prevention of cardiovascular diseases. Some studies have shown lifestyle improvements with cardiovascular rehabilitation programs,13 and studies in primary prevention have suggested small, but potentially important, reductions in the risk of cardiovascular disease. However, these studies have had limitations and have recommended further research.4,5 According to national and international guidelines for cardiovascular risk management, measures to prevent cardiovascular disease, such as patient education and support for lifestyle change, can be delegated to practice nurses in primary care.68 However, we do not know whether the delivery of primary prevention programs by practice nurses is effective. We also do not know the effect of nurse-led prevention, including shared decision-making and risk communication, on cardiovascular risk. Because an unhealthy lifestyle plays an important role in the development of cardiovascular disease,9,10 preventive guidelines on cardiovascular disease and diabetes recommend education and counselling about smoking, diet, physical exercise and alcohol consumption for patients with moderately and highly increased risk.6,11 These patients are usually monitored in primary care practices. The adherence to lifestyle advice ranges from 20% to 90%,1215 and improving adherence requires effective interventions, comprising cognitive, behavioural and affective components (strategies to influence adherence to lifestyle advice via feelings and emotions or social relationships and social supports).16 Shared treatment decisions are highly preferred. Informed and shared decision-making requires that all information about the cardiovascular risk and the pros and cons of the risk-reduction options be shared with the patient, and that the patients’ individual values, personal resources and capacity for self-determination be respected.1719 In our cardiovascular risk reduction study,20 we developed an innovative implementation strategy that included a central role for practice nurses. Key elements of our intervention included risk assessment, risk communication, use of a decision aid and adapted motivational interviewing (Box 1).19,21,22

Box 1: Key features of the nurse-led intervention

  • Risk assessment (intervention and control): The absolute 10-year mortality risk from cardiovascular diseases was assessed with use of a risk table from the Dutch guidelines (for patients without diabetes) or the UK Prospective Diabetes Study risk engine (for patients with diabetes).6,23 Nurses in the control group continued to provide usual care after this step.
  • Risk communication (intervention only): Nurses informed the patients of their absolute 10-year cardiovascular mortality risk using a risk communication tool developed for this study.2437
  • Decision support (intervention only): Nurses provided support to the patients using an updated decision aid.28 This tool facilitated the nurses’ interaction with the patients to arrive at informed, value-based choices for risk reduction. The tool provided information about the options and their associated relevant outcomes.
  • Adapted motivational interviewing (intervention only): Nurses discussed the options for risk reduction. The patient’s personal values were elicited using adapted motivational interviewing.
In the present study, we investigated whether a nurse-led intervention in primary care had a positive effect on lifestyle and 10-year cardiovascular risk. We hypothesized that involving patients in decision-making would increase adherence to lifestyle changes and decrease the absolute risk of 10-year cardiovascular mortality.  相似文献   

8.

Background:

Radial-head subluxation is an easily identified and treated injury. We investigated whether triage nurses in the emergency department can safely reduce radial-head subluxation at rates that are not substantially lower than those of emergency department physicians.

Methods:

We performed an open, noninferiority, cluster-randomized controlled trial. Children aged 6 years and younger who presented to the emergency department with findings consistent with radial-head subluxation and who had sustained a known injury in the previous 12 hours were assigned to either nurse-initiated or physician-initiated treatment, depending on the day. The primary outcome was the proportion of children who had a successful reduction (return to normal arm usage). We used a noninferiority margin of 10%.

Results:

In total, 268 children were eligible for inclusion and 245 were included in the final analysis. Of the children assigned to receive physician-initiated care, 96.7% (117/121) had a successful reduction performed by a physician. Of the children assigned to receive nurse-initiated care, 84.7% (105/124) had a successful reduction performed by a nurse. The difference in the proportion of successful reductions of radial-head subluxation between the groups was 12.0% (95% confidence interval [CI] 4.8% to 19.7%). Noninferiority of nurse-initiated reduction of radial-head subluxation was not shown.
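The noninferiority conclusion follows from comparing the confidence interval for the difference in success rates with the prespecified 10% margin. A sketch of that check, using the proportions reported above:

```python
# Noninferiority check: the nurse arm is noninferior only if the upper bound of
# the 95% CI for the difference in success rates stays below the 10% margin.
success_physician = 117 / 121       # 96.7%
success_nurse = 105 / 124           # 84.7%
difference = success_physician - success_nurse   # ~0.120
ci_upper = 0.197                    # upper bound of the reported 95% CI
margin = 0.10

noninferior = ci_upper < margin
print(round(difference, 3), noninferior)   # 0.12 False -> noninferiority not shown
```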

Interpretation:

In this trial, the rate of successful reduction of radial-head subluxation performed by nurses was inferior to the physician success rate. Although the success rate in the nurse-initiated care group did not meet the noninferiority margin, nurses were able to reduce radial-head subluxation for almost 85% of children who presented with probable radial-head subluxation. Trial registration: ClinicalTrials.gov, no. NCT00993954. Radial-head subluxation is a common arm injury among young children and often results in a visit to the emergency department.1 This type of injury occurs when forceful longitudinal traction is applied to an extended and pronated forearm.2 Radial-head subluxation is easily recognized by its clinical presentation and can be treated by a simple reduction technique involving hyperpronation or supination and flexion of the injured arm.37 Despite the ease of diagnosis and treatment, children with radial-head subluxation often wait hours in the emergency department for a reduction that takes minutes to perform.8 These visits have direct health care costs and involve time and stress for the child and their family. Early treatment and shorter wait times correlate with patient satisfaction.9,10 Patient satisfaction is comparable when minor injuries are cared for by a nurse instead of by a physician.1113 Nurse-initiated treatments are increasingly a focus of health care.1417 Treatment of radial-head subluxation is an appropriate area to consider nurse-initiated care. Our objective was to determine whether triage nurses, trained in the recognition and treatment of radial-head subluxation, could successfully reduce radial-head subluxation at a rate similar to that of physicians.

9.

Background

Preterm birth occurs in 5%–13% of pregnancies. It is a leading cause of perinatal mortality and morbidity and has adverse long-term consequences for the health of the child. Because of the role selenium plays in attenuating inflammation, and because low concentrations of selenium have been found in women with preeclampsia, we hypothesized that low maternal selenium status during early gestation would increase the risk of preterm birth.

Methods

White Dutch women with a singleton pregnancy (n = 1197) were followed prospectively from 12 weeks’ gestation. Women with thyroid disease or type 1 diabetes were excluded. At delivery, 1129 women had complete birth-outcome data. Serum concentrations of selenium were measured during the 12th week of pregnancy. Deliveries were classified as preterm or term, and preterm births were subcategorized as iatrogenic, spontaneous or the result of premature rupture of the membranes.

Results

Of the 60 women (5.3%) who had a preterm birth, 21 had premature rupture of the membranes and 13 had preeclampsia. The serum selenium concentration at 12 weeks’ gestation was significantly lower among women who had a preterm birth than among those who delivered at term (mean 0.96 [standard deviation (SD) 0.14] μmol/L v. 1.02 [SD 0.13] μmol/L; t = 2.9, p = 0.001). Women were grouped by quartile of serum selenium concentration at 12 weeks’ gestation. The number of women who had a preterm birth significantly differed by quartile (χ2 = 8.01, 3 degrees of freedom, p < 0.05). Women in the lowest quartile of serum selenium had twice the risk of preterm birth as women in the upper three quartiles, even after adjustment for the occurrence of preeclampsia (adjusted odds ratio 2.18, 95% confidence interval 1.25–3.77).
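For readers unfamiliar with the adjusted odds ratio reported above, the unadjusted version comes from a 2 × 2 table (lowest quartile v. upper three quartiles) with a Woolf-type confidence interval. A sketch with illustrative, hypothetical cell counts, since the abstract does not report the actual counts:

```python
import math

# Hypothetical 2x2 table: lowest selenium quartile vs. upper three quartiles.
# Cell counts are illustrative only; the abstract does not report them.
a, b = 25, 257   # lowest quartile: preterm, term
c, d = 35, 812   # upper three quartiles: preterm, term

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)           # Woolf standard error
ci = (math.exp(math.log(odds_ratio) - 1.96 * se_log_or),
      math.exp(math.log(odds_ratio) + 1.96 * se_log_or))

print(round(odds_ratio, 2), tuple(round(x, 2) for x in ci))
```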

Interpretation

Having low serum selenium at the end of the first trimester was related to preterm birth and was independent of the mother having preeclampsia. Low maternal selenium status during early gestation may increase the risk of preterm premature rupture of the membranes, which is a major cause of preterm birth. Preterm birth occurs in 5%–13% of pregnancies and is a major public health concern worldwide.1 Preterm birth, defined as delivery before 37 weeks’ gestation, is the leading cause of perinatal mortality and morbidity.2 Short- and long-term consequences to the health of the child include cerebral palsy, respiratory distress syndrome, neurodevelopmental impairment, learning difficulties and behavioural problems.2 Despite substantial efforts to explain the mechanisms involved, the incidence of preterm birth is on the rise. For example, in the United States, the incidence increased from 9.5% in 1981 to 12.7% in 2005.1 Consequently, it is important to identify factors that may contribute to preterm birth, particularly those factors that are preventable. Maternal risk factors for preterm birth include a previous preterm delivery, black race, low socioeconomic status, poor nutrition or becoming pregnant soon after a previous delivery.1 Risk factors for preterm birth during gestation include multiple-gestation pregnancy and an intrauterine infection that triggers an inflammatory response.1,3 Endocrine conditions such as diabetes and dysfunction of the thyroid have also been associated with preterm birth, sometimes linked to preterm premature rupture of the membranes.4,5 The trace mineral selenium, available from food (though to a greater or lesser extent according to region), can interact with a number of these risk factors.69 It has been implicated in pregnancy outcome,912 and it plays a role in the immune response and the body’s resistance to infection.6 Enzymes containing the mineral, selenoenzymes, can attenuate the excessive inflammatory response associated with adverse pregnancy outcomes, downregulating the expression of pro-inflammatory genes.68 A polymorphism in the gene encoding the selenoprotein SEPS1 has been shown to affect the risk of preeclampsia, a condition that has a strong inflammatory component that is an important cause of preterm birth.8 In addition, low selenium status has been identified in women with preeclampsia.12 We hypothesized that low maternal selenium status (as measured by low serum selenium concentration early in gestation) would be associated with preterm birth. Previous small studies have compared plasma selenium and plasma/erythrocyte glutathione peroxidase in mothers and their babies during both term and preterm deliveries. Although lower values have often been found in mothers who had their babies preterm than in mothers who had their babies at term, the findings were inconsistent.1315 We did a prospective study to assess selenium status in a large cohort of pregnant women who were followed from early gestation to delivery.

10.

Background

Belgium’s law on euthanasia allows only physicians to perform the act. We investigated the involvement of nurses in the decision-making and in the preparation and administration of life-ending drugs with a patient’s explicit request (euthanasia) or without an explicit request. We also examined factors associated with these deaths.

Methods

In 2007, we surveyed 1678 nurses who, in an earlier survey, had reported caring for one or more patients who received a potential life-ending decision within the year before the survey. Eligible nurses were surveyed about their most recent case.

Results

The response rate was 76%. Overall, 128 nurses reported having cared for a patient who received euthanasia and 120 for a patient who received life-ending drugs without his or her explicit request. Respectively, 64% (75/117) and 69% (81/118) of these nurses were involved in the physician’s decision-making process. More often this entailed an exchange of information on the patient’s condition or the patient’s or relatives’ wishes (45% [34/117] and 51% [41/118]) than sharing in the decision-making (24% [18/117] and 31% [25/118]). The life-ending drugs were administered by the nurse in 12% of the cases of euthanasia, as compared with 45% of the cases of assisted death without an explicit request. In both types of assisted death, the nurses acted on the physician’s orders but mostly in the physician’s absence. Factors significantly associated with a nurse administering the life-ending drugs included being a male nurse working in a hospital (odds ratio [OR] 40.07, 95% confidence interval [CI] 7.37–217.79) and the patient being over 80 years old (OR 5.57, 95% CI 1.98–15.70).

Interpretation

By administering the life-ending drugs in some of the cases of euthanasia, and in almost half of the cases without an explicit request from the patient, the nurses in our study operated beyond the legal margins of their profession.Medical end-of-life decisions with a possible or certain life-shortening effect occur often in end-of-life care.15 The most controversial and ethically debated medical practice is that in which drugs are administered with the intention of ending the patient’s life, whether at the patient’s explicit request (euthanasia) or not. The debate focuses mainly on the role and responsibilities of the physician.6 However, physicians worldwide have reported that nurses are also involved in these medical practices, mostly in the decision-making and sometimes in the administration of the life-ending drugs.13,79 Critical care,10 oncology11 and palliative care nurses12,13 have confirmed this by reporting their own involvement, particularly in cases of euthanasia.14,15In Belgium, the law permits physicians to perform euthanasia under strict requirements of due care, one of which is that they must discuss the request with the nurses involved.16 There are no further explicit stipulations determining the role of nurses in euthanasia. Physician-assisted death is legally regulated in some other countries as well (e.g., the Netherlands, Luxemburg and the US states of Oregon and Washington State), without specifying the role of nurses. Reports from nurses in these jurisdictions are scarce, apart from some that are limited to particular settings, or lack details about their involvement.13,14We conducted this study to investigate the involvement of nurses in Flanders, Belgium, in the decision-making and in the preparation and administration of life-ending drugs with, or without, a patient’s explicit request. We also examined patient- and nurse-related factors associated with the involvement of nurses in these deaths. In a related research article, Chambaere and colleagues describe the findings from a survey of physicians in Flanders about the practices of euthanasia and assisted suicide, and the use of life-ending drugs without an explicit request from the patient.17  相似文献   

11.
Background:Several clinical prediction rules for diagnosing group A streptococcal infection in children with pharyngitis are available. We aimed to compare the diagnostic accuracy of rules-based selective testing strategies in a prospective cohort of children with pharyngitis.Methods:We identified clinical prediction rules through a systematic search of MEDLINE and Embase (1975–2014), which we then validated in a prospective cohort involving French children who presented with pharyngitis during a 1-year period (2010–2011). We diagnosed infection with group A streptococcus using two throat swabs: one obtained for a rapid antigen detection test (StreptAtest, Dectrapharm) and one obtained for culture (reference standard). We validated rules-based selective testing strategies as follows: low risk of group A streptococcal infection, no further testing or antibiotic therapy needed; intermediate risk of infection, rapid antigen detection for all patients and antibiotic therapy for those with a positive test result; and high risk of infection, empiric antibiotic treatment.Results:We identified 8 clinical prediction rules, 6 of which could be prospectively validated. Sensitivity and specificity of rules-based selective testing strategies ranged from 66% (95% confidence interval [CI] 61–72) to 94% (95% CI 92–97) and from 40% (95% CI 35–45) to 88% (95% CI 85–91), respectively. Use of rapid antigen detection testing following the clinical prediction rule ranged from 24% (95% CI 21–27) to 86% (95% CI 84–89). None of the rules-based selective testing strategies achieved our diagnostic accuracy target (sensitivity and specificity > 85%).Interpretation:Rules-based selective testing strategies did not show sufficient diagnostic accuracy in this study population. The relevance of clinical prediction rules for determining which children with pharyngitis should undergo a rapid antigen detection test remains questionable.Pharyngitis accounts for about 6% of visits by children to primary care physicians each year in high-income nations.1 Group A streptococcus is found in 30%–40% of cases of childhood pharyngitis; the remaining cases are considered viral.2 Antibiotic treatment is indicated for group A streptococcal infection to prevent suppurative (e.g., retropharyngeal abscess and quinsy) and nonsuppurative complications (e.g., acute rheumatic fever and rheumatic heart disease) and to reduce the duration of symptoms and the spread of the condition.3 In settings where poststreptococcal diseases have become uncommon, such as Western Europe and North America,4 the public health goal is shifting from preventing complications to minimizing the inappropriate use of antibiotic agents to contain antimicrobial resistance.5 However, 60%–70% of the visits by children with pharyngitis to American primary care physicians result in antibiotic agents being prescribed.68Because signs and symptoms of streptococcal and viral pharyngitis overlap, most experts recommend that the diagnosis of group A streptococcal infection be confirmed by a throat culture or rapid antigen detection test.913 Whereas European guidelines suggest all children with pharyngitis undergo such testing,14 North American guidelines recommend that clinicians select patients on the basis of clinical and epidemiologic grounds.1113 Currently, there is no guidance from the Canadian Medical Association or Canadian Paediatric Society for the management of pharyngitis.Various clinical prediction rules that combine signs and symptoms have been proposed to help 
clinicians define groups of patients according to the clinical likelihood of group A streptococcal infection.1518 These rules aim to identify patients at low risk in whom the disease can be managed without further testing and without antibiotic treatment, and patients at high risk who could receive empiric antibiotic treatment without testing.16 Clinical prediction rules for pharyngitis have not been sufficiently validated for clinical practice and have never been compared head-to-head in a single pediatric population from a high-income country.18The purpose of our study was to externally validate and directly compare the diagnostic accuracy of relevant rules-based selective testing strategies with original data from a French prospective multicentre cohort of children with pharyngitis. To optimize this validation study, we first conducted a systematic review of existing clinical prediction rules.  相似文献   
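The rules-based selective testing strategy described in the Methods above can be written as a short triage function. A sketch, assuming the risk group has already been assigned by a clinical prediction rule (the scores and thresholds of the individual rules are not shown):

```python
# Sketch of a rules-based selective testing strategy. The risk group is assumed
# to come from a clinical prediction rule; rule-specific scores are not shown.
def manage_pharyngitis(risk_group, radt_positive=None):
    """risk_group: 'low', 'intermediate' or 'high'; radt_positive: result of the
    rapid antigen detection test, if performed."""
    if risk_group == "low":
        return "no further testing, no antibiotics"
    if risk_group == "high":
        return "empiric antibiotic treatment"
    # intermediate risk: test everyone, treat only positives
    if radt_positive is None:
        return "perform rapid antigen detection test"
    return "antibiotics" if radt_positive else "no antibiotics"

print(manage_pharyngitis("intermediate", radt_positive=True))  # antibiotics
```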

12.

Background

The pathogenesis of appendicitis is unclear. We evaluated whether exposure to air pollution was associated with an increased incidence of appendicitis.

Methods

We identified 5191 adults who had been admitted to hospital with appendicitis between Apr. 1, 1999, and Dec. 31, 2006. The air pollutants studied were ozone, nitrogen dioxide, sulfur dioxide, carbon monoxide, and suspended particulate matter of less than 10 μm and less than 2.5 μm in diameter. We estimated the odds of appendicitis relative to short-term increases in concentrations of selected pollutants, alone and in combination, after controlling for temperature and relative humidity as well as the effects of age, sex and season.

Results

An interquartile-range increase in the 5-day average of ozone was associated with appendicitis (odds ratio [OR] 1.14, 95% confidence interval [CI] 1.03–1.25). In summer (July–August), the effects were most pronounced for ozone (OR 1.32, 95% CI 1.10–1.57), sulfur dioxide (OR 1.30, 95% CI 1.03–1.63), nitrogen dioxide (OR 1.76, 95% CI 1.20–2.58), carbon monoxide (OR 1.35, 95% CI 1.01–1.80) and particulate matter less than 10 μm in diameter (OR 1.20, 95% CI 1.05–1.38). We observed a significant effect of the air pollutants in the summer months among men but not among women (e.g., OR for increase in the 5-day average of nitrogen dioxide 2.05, 95% CI 1.21–3.47, among men and 1.48, 95% CI 0.85–2.59, among women). The double-pollutant model of exposure to ozone and nitrogen dioxide in the summer months was associated with attenuation of the effects of ozone (OR 1.22, 95% CI 1.01–1.48) and nitrogen dioxide (OR 1.48, 95% CI 0.97–2.24).

Interpretation

Our findings suggest that some cases of appendicitis may be triggered by short-term exposure to air pollution. If these findings are confirmed, measures to improve air quality may help to decrease rates of appendicitis.Appendicitis was introduced into the medical vernacular in 1886.1 Since then, the prevailing theory of its pathogenesis implicated an obstruction of the appendiceal orifice by a fecalith or lymphoid hyperplasia.2 However, this notion does not completely account for variations in incidence observed by age,3,4 sex,3,4 ethnic background,3,4 family history,5 temporal–spatial clustering6 and seasonality,3,4 nor does it completely explain the trends in incidence of appendicitis in developed and developing nations.3,7,8The incidence of appendicitis increased dramatically in industrialized nations in the 19th century and in the early part of the 20th century.1 Without explanation, it decreased in the middle and latter part of the 20th century.3 The decrease coincided with legislation to improve air quality. For example, after the United States Clean Air Act was passed in 1970,9 the incidence of appendicitis decreased by 14.6% from 1970 to 1984.3 Likewise, a 36% drop in incidence was reported in the United Kingdom between 1975 and 199410 after legislation was passed in 1956 and 1968 to improve air quality and in the 1970s to control industrial sources of air pollution. Furthermore, appendicitis is less common in developing nations; however, as these countries become more industrialized, the incidence of appendicitis has been increasing.7Air pollution is known to be a risk factor for multiple conditions, to exacerbate disease states and to increase all-cause mortality.11 It has a direct effect on pulmonary diseases such as asthma11 and on nonpulmonary diseases including myocardial infarction, stroke and cancer.1113 Inflammation induced by exposure to air pollution contributes to some adverse health effects.1417 Similar to the effects of air pollution, a proinflammatory response has been associated with appendicitis.1820We conducted a case–crossover study involving a population-based cohort of patients admitted to hospital with appendicitis to determine whether short-term increases in concentrations of selected air pollutants were associated with hospital admission because of appendicitis.  相似文献   

13.

Background

Inuit have not experienced an epidemic of type 2 diabetes mellitus, and it has been speculated that they may be protected from obesity’s metabolic consequences. We conducted a population-based screening for diabetes among Inuit in the Canadian Arctic and evaluated the association of visceral adiposity with diabetes.

Methods

A total of 36 communities participated in the International Polar Year Inuit Health Survey. Of the 2796 Inuit households approached, 1901 (68%) participated, with 2595 participants. Households were randomly selected, and adult residents were invited to participate. Assessments included anthropometry and fasting plasma lipids and glucose, and, because of survey logistics, only 32% of participants underwent a 75 g oral glucose tolerance test. We calculated weighted prevalence estimates of metabolic risk factors for all participants.
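The weighted prevalence estimates mentioned above are design-weighted proportions: each respondent contributes his or her survey weight. A minimal sketch with illustrative values:

```python
# Design-weighted prevalence: each respondent contributes his or her survey
# weight. Values below are illustrative only.
def weighted_prevalence(has_condition, weights):
    """has_condition: 0/1 indicators; weights: survey weights (same length)."""
    total_weight = sum(weights)
    case_weight = sum(w for y, w in zip(has_condition, weights) if y)
    return case_weight / total_weight

print(weighted_prevalence([1, 0, 0, 1, 0], [2.0, 1.5, 1.0, 0.5, 3.0]))  # 0.3125
```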

Results

Participants’ mean age was 43.3 years; 35% were obese, 43.8% had an at-risk waist, and 25% had an elevated triglyceride level. Diabetes was identified in 12.2% of participants aged 50 years and older and in 1.9% of those younger than 50 years. A hypertriglyceridemic-waist phenotype was a strong predictor of diabetes (odds ratio [OR] 8.6, 95% confidence interval [CI] 2.1–34.6) in analyses adjusted for age, sex, region, family history of diabetes, education and use of lipid-lowering medications.

Interpretation

Metabolic risk factors were prevalent among Inuit. Our results suggest that Inuit are not protected from the metabolic consequences of obesity, and that their rate of diabetes prevalence is now comparable to that observed in the general Canadian population. Assessment of waist circumference and fasting triglyceride levels could represent an efficient means for identifying Inuit at high risk for diabetes.Indigenous people across the Arctic continue to undergo cultural transitions that affect all dimensions of life, with implications for emerging obesity and changes in patterns of disease burden.13 A high prevalence of obesity among Canadian Inuit has been noted,3,4 and yet studies have suggested that the metabolic consequences of obesity may not be as severe among Inuit as they are in predominantly Caucasian or First Nations populations.46 Conversely, the prevalence of type 2 diabetes mellitus, which was noted to be rare among Inuit in early studies,7,8 now matches or exceeds that of predominately Caucasian comparison populations in Alaska and Greenland.911 However, in Canada, available reports suggest that diabetes prevalence among Inuit remains below that of the general Canadian population.3,12Given the rapid changes in the Arctic and a lack of comprehensive and uniform screening assessments, we used the International Polar Year Inuit Health Survey for Adults 2007–2008 to assess the current prevalence of glycemia and the toll of age and adiposity on glycemia in this population. However, adiposity is heterogeneous, and simple measures of body mass index (BMI) in kg/m2 and waist circumference do not measure visceral adiposity (or intra-abdominal adipose tissue), which is considered more deleterious than subcutaneous fat.13 Therefore, we evaluated the “hypertriglyceridemic-waist” phenotype (i.e., the presence of both an at-risk waist circumference and an elevated triglyceride level) as a proxy indicator of visceral fat.1315  相似文献   

14.
Riediger ND  Clara I 《CMAJ》2011,183(15):E1127-E1134

Background:

Metabolic syndrome refers to a constellation of conditions that increases a person’s risk of diabetes and cardiovascular disease. We describe the prevalence of metabolic syndrome and its components in relation to sociodemographic factors in the Canadian adult population.

Methods:

We used data from cycle 1 of the Canadian Health Measures Survey, a cross-sectional survey of a representative sample of the population. We included data for respondents aged 18 years and older for whom fasting blood samples were available; pregnant women were excluded. We calculated weighted estimates of the prevalence of metabolic syndrome and its components in relation to age, sex, education level and income.

Results:

The estimated prevalence of metabolic syndrome was 19.1%. Age was the strongest predictor of the syndrome: 17.0% of participants 18–39 years old had metabolic syndrome, as compared with 39.0% of those 70–79 years. Abdominal obesity was the most common component of the syndrome (35.0%) and was more prevalent among women than among men (40.0% v. 29.1%; p = 0.013). Men were more likely than women to have an elevated fasting glucose level (18.9% v. 13.6%; p = 0.025) and hypertriglyceridemia (29.0% v. 20.0%; p = 0.012). The prevalence of metabolic syndrome was higher among people in households with lower education and income levels.

Interpretation:

About one in five Canadian adults had metabolic syndrome. People at increased risk were those in households with lower education and income levels. The burden of abdominal obesity, low HDL (high-density lipoprotein) cholesterol and hypertriglyceridemia among young people was especially of concern, because the risk of cardiovascular disease increases with age.Chronic disease contributes significantly to morbidity and mortality in the Canadian population.1 As such, the economic costs are substantial. Metabolic syndrome refers to a constellation of conditions that approximately doubles a person’s risk of cardiovascular disease, independently of other risk factors.25 The cause of metabolic syndrome has not been fully elucidated; a summary of the current proposed mechanisms is discussed elsewhere.6Several sets of criteria have been established for the detection of metabolic syndrome, many of which have been continually updated.68 The set of criteria most commonly used in the past was published in the third report of the National Cholesterol Education Program Expert Panel on Detection, Evaluation, and Treatment of High Blood Cholesterol in Adults (Adult Treatment Panel III criteria).9 Recently, the International Diabetes Federation, the American Heart Association, the National Heart, Lung, and Blood Institute, and other organizations collaborated to release a unified set of criteria.10The Canadian Health Measures Survey, conducted in 2007–2009, was the first cross-sectional survey of a representative sample of Canadians that collected biological samples since the Canadian Heart Health Surveys about 20 years ago.11 We used data from the Canadian Health Measures Survey to describe the prevalence of metabolic syndrome and its components by age, sex, education level and income adequacy in a sample of the Canadian adult population. Because different studies have used various criteria in the past to define metabolic syndrome, and because there is continuing controversy as to the appropriate criteria, we calculated the prevalence according to several types of criteria to better facilitate comparison to findings from past and future studies.  相似文献   

15.

Background

Recent studies have reported a high prevalence of relative adrenal insufficiency in patients with liver cirrhosis. However, the effect of corticosteroid replacement on mortality in this high-risk group remains unclear. We examined the effect of low-dose hydrocortisone in patients with cirrhosis who presented with septic shock.

Methods

We enrolled patients with cirrhosis and septic shock aged 18 years or older in a randomized double-blind placebo-controlled trial. Relative adrenal insufficiency was defined as a serum cortisol increase of less than 250 nmol/L or 9 μg/dL from baseline after stimulation with 250 μg of intravenous corticotropin. Patients were assigned to receive 50 mg of intravenous hydrocortisone or placebo every six hours until hemodynamic stability was achieved, followed by steroid tapering over eight days. The primary outcome was 28-day all-cause mortality.
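The definition of relative adrenal insufficiency used above reduces to a simple threshold on the change in serum cortisol after corticotropin stimulation. A sketch:

```python
# Relative adrenal insufficiency as defined above: a rise in serum cortisol of
# less than 250 nmol/L (9 μg/dL) from baseline after 250 μg of corticotropin.
def relative_adrenal_insufficiency(baseline_nmol_l, stimulated_nmol_l):
    return (stimulated_nmol_l - baseline_nmol_l) < 250

print(relative_adrenal_insufficiency(300, 480))  # True: rise of only 180 nmol/L
```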

Results

The trial was stopped for futility at interim analysis after 75 patients were enrolled. Relative adrenal insufficiency was diagnosed in 76% of patients. Compared with the placebo group (n = 36), patients in the hydrocortisone group (n = 39) had a significant reduction in vasopressor doses and higher rates of shock reversal (relative risk [RR] 1.58, 95% confidence interval [CI] 0.98–2.55, p = 0.05). Hydrocortisone use was not associated with a reduction in 28-day mortality (RR 1.17, 95% CI 0.92–1.49, p = 0.19) but was associated with an increase in shock relapse (RR 2.58, 95% CI 1.04–6.45, p = 0.03) and gastrointestinal bleeding (RR 3.00, 95% CI 1.08–8.36, p = 0.02).

Interpretation

Relative adrenal insufficiency was very common in patients with cirrhosis presenting with septic shock. Despite initial favourable effects on hemodynamic parameters, hydrocortisone therapy did not reduce mortality and was associated with an increase in adverse effects. (Current Controlled Trials registry no. ISRCTN99675218.)Cirrhosis is a leading cause of death worldwide,1 often with septic shock as the terminal event.29 Relative adrenal insufficiency shares similar features of distributive hyperdynamic shock with both cirrhosis and sepsis10,11 and increasingly has been reported to coexist with both conditions.11,12 The effect of low-dose hydrocortisone therapy on survival of critically ill patients in general with septic shock remains controversial, with conflicting results from randomized controlled trials1317 and meta-analyses.18,19 The effect of hydrocortisone therapy on mortality among patients with cirrhosis, who are known to be a group at high risk for relative adrenal insufficiency, has not been studied and hence was the objective of our study.  相似文献   

16.

Background:

Recent guidelines suggest lowering the target blood pressure for patients with chronic kidney disease, although the strength of evidence for this suggestion has been uncertain. We sought to assess the renal and cardiovascular effects of intensive blood pressure lowering in people with chronic kidney disease.

Methods:

We performed a systematic review and meta-analysis of all relevant reports published between 1950 and July 2011 identified in a search of MEDLINE, Embase and the Cochrane Library. We included randomized trials that assigned patients with chronic kidney disease to different target blood pressure levels and reported kidney failure or cardiovascular events. Two reviewers independently identified relevant articles and extracted data.

Results:

We identified 11 trials providing information on 9287 patients with chronic kidney disease and 1264 kidney failure events (defined as either a composite of doubling of serum creatinine level and 50% decline in glomerular filtration rate, or end-stage kidney disease). Compared with standard regimens, a more intensive blood pressure–lowering strategy reduced the risk of the composite outcome (hazard ratio [HR] 0.82, 95% confidence interval [CI] 0.68–0.98) and end-stage kidney disease (HR 0.79, 95% CI 0.67–0.93). Subgroup analysis showed effect modification by baseline proteinuria (p = 0.006) and markers of trial quality. Intensive blood pressure lowering reduced the risk of kidney failure (HR 0.73, 95% CI 0.62–0.86), but not in patients without proteinuria at baseline (HR 1.12, 95% CI 0.67–1.87). There was no clear effect on the risk of cardiovascular events or death.
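
Pooled hazard ratios of this kind are typically built by inverse-variance weighting of log hazard ratios; the sketch below shows a fixed-effect version under that assumption. The input estimates are invented, and the review itself may have used a different (e.g., random-effects) model.

```python
# Fixed-effect inverse-variance pooling of log hazard ratios (illustrative inputs).
import math

def pool_fixed_effect(hrs_with_ci, z=1.96):
    weights, weighted_logs = [], []
    for hr, lo, hi in hrs_with_ci:
        se = (math.log(hi) - math.log(lo)) / (2 * z)   # SE recovered from the 95% CI
        w = 1 / se**2
        weights.append(w)
        weighted_logs.append(w * math.log(hr))
    pooled_log = sum(weighted_logs) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    return (math.exp(pooled_log),
            math.exp(pooled_log - z * pooled_se),
            math.exp(pooled_log + z * pooled_se))

# Three made-up trial results: (HR, lower CI, upper CI)
print(pool_fixed_effect([(0.75, 0.60, 0.94), (0.90, 0.70, 1.16), (0.80, 0.62, 1.03)]))
```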

Interpretation:

Intensive blood pressure lowering appears to provide protection against kidney failure events in patients with chronic kidney disease, particularly among those with proteinuria. More data are required to determine the effects of such a strategy among patients without proteinuria.

Chronic kidney disease is a major public health problem worldwide, affecting 10%–15% of the adult population.1 Blood pressure–lowering agents are the mainstay of management strategies aiming to slow the progression of chronic kidney disease, as well as a core aspect of strategies aiming to reduce cardiovascular risk.24 Observational studies have shown a log-linear increase in the risk of kidney failure with high blood pressure levels across the observed range,57 suggesting that further lowering blood pressure could reduce the risk of kidney failure at most blood pressure levels. Current guidelines recommend a blood pressure target below 130/80 mm Hg for patients with chronic kidney disease,810 but this recommendation is based mostly on observational studies and a single randomized trial (the Modification of Diet in Renal Disease [MDRD] study) that focused on kidney protection.11 Subsequent trials of different targets in people with chronic kidney disease have yielded inconsistent results.12,13,14 The recent Canadian Hypertension Education Program guideline, which suggested a less aggressive target, criticized other guidelines on the grounds that their blood pressure recommendations went beyond the available evidence. This criticism has been supported by a recent systematic review (no meta-analysis was performed) that focused on 3 trials and reported inconclusive results overall but raised the possibility that proteinuria was an effect modifier.15 The final result has been clinician uncertainty about optimal blood pressure levels in patients with chronic kidney disease.

We sought to synthesize the results of all available trials that evaluated the effects of different blood pressure targets in people with chronic kidney disease and to better define the balance of risks and benefits associated with different intensities of blood pressure lowering in this population.

17.

Background

Poor muscular strength has been shown to be associated with increased morbidity and mortality in diverse samples of middle-aged and elderly people. However, the oldest old population (i.e., over 85 years) is underrepresented in such studies. Our objective was to assess the association between muscular strength and mortality in the oldest old population.

Methods

We included 555 participants (65% women) from the Leiden 85-plus study, a prospective population-based study of all 85-year-old inhabitants of Leiden, the Netherlands. We measured the handgrip strength of participants at baseline and again at age 89 years. We collected baseline data on comorbidities, functional status and levels of physical activity, and we adjusted for potential confounders. During the follow-up period, we collected data on mortality.

Results

During a follow-up period of 9.5 years (range 8.5–10.5 years), 444 (80%) participants died. Risk for all-cause mortality was elevated among participants in the lowest tertile of handgrip strength at age 85 years (hazard ratio [HR] 1.35, 95% confidence interval [CI] 1.00–1.82, p = 0.047) and the lowest two tertiles of handgrip strength at age 89 years (HR 2.04, CI 1.24–3.35, p = 0.005 and HR 1.73, CI 1.11–2.70, p = 0.016). We also observed significantly increased mortality among participants in the tertile with the highest relative loss of handgrip strength over four years (HR 1.72, CI 1.07–2.77, p = 0.026).
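
A sketch of how tertile-specific hazard ratios of this kind can be estimated with a Cox proportional hazards model is given below; it uses simulated data and the lifelines library, and it is not the study's actual analysis code (column names, sample size and effect sizes are all assumptions).

```python
# Hazard ratios by handgrip-strength tertile from a Cox model (simulated data).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 300
grip = rng.normal(25, 6, n)                          # handgrip strength, kg
# Weaker grip -> shorter simulated survival time (illustrative effect only).
time = rng.exponential(scale=2 + 0.15 * grip, size=n)
event = (time < 9.5).astype(int)                     # deaths within ~9.5 years
time = np.minimum(time, 9.5)                         # administrative censoring

df = pd.DataFrame({"time": time, "event": event, "grip": grip})
df["tertile"] = pd.qcut(df["grip"], 3, labels=["low", "mid", "high"])
df = pd.get_dummies(df, columns=["tertile"], dtype=int)

cph = CoxPHFitter()
cph.fit(df[["time", "event", "tertile_low", "tertile_mid"]],
        duration_col="time", event_col="event")      # highest tertile is the reference
cph.print_summary()                                  # exp(coef) = hazard ratio vs. highest tertile
```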

Interpretation

Handgrip strength, a surrogate measurement of overall muscular strength, is a predictor of all-cause mortality in the oldest old population and may serve as a convenient tool for prognostication of mortality risk among elderly people.

The fastest growing segment of the elderly population is the group older than 85 years, which is classified as the oldest old age group.1,2 The average rate of growth of this group is reported to be 3.8% annually at a global level. By 2050, the oldest old age group will account for one-fifth of all older persons.2

Inactivity is a major problem in this age group, owing to an increased prevalence of medical comorbidities and physical disability with age. Age-related stereotypes and misconceptions (e.g., that older people are invariably unhealthy), coupled with a perceived lack of benefits provided by physical activity, can also represent obstacles to exercise among the oldest old population.

The predisposing influence of a sedentary lifestyle on age-related cardiometabolic diseases (i.e., obesity, type 2 diabetes mellitus, hypertension and coronary artery disease) is well established. Evidence of the protective effects of physical activity against certain cancers, falls and mental health problems is accumulating.3,4 Lack of exercise is also a significant risk factor for sarcopenia,5,6 a progressive loss of skeletal muscle mass and strength with aging.7 Sarcopenia is highly prevalent among those aged 80 years and older, with reported rates exceeding 50%.8 Reduced muscular strength is associated in turn with outcomes such as physical disability,9,10 cognitive decline11 and mortality.12,13

Handgrip strength, a simple bedside tool, has been shown to be a valid surrogate measurement of overall muscular strength.14,15 A recent systematic review has shown that low handgrip strength is associated consistently with premature mortality, disability and other health-related complications among various samples of middle-aged and older people.16 Despite its prognostic value, handgrip dynamometry is rarely used in routine geriatric assessment. Epidemiologic studies evaluating the relation in the population of the oldest old are also lacking. We tested the association between handgrip strength and mortality in a prospective population-based study of the oldest old age group. We obtained approval for our study from the Medical Ethical Committee of the Leiden University Medical Center, and informed consent from all participants.

18.
19.

Background:

Shorter resident duty periods are increasingly mandated to improve patient safety and physician well-being. However, increases in continuity-related errors may counteract the purported benefits of reducing fatigue. We evaluated the effects of 3 resident schedules in the intensive care unit (ICU) on patient safety, resident well-being and continuity of care.

Methods:

Residents in 2 university-affiliated ICUs were randomly assigned (in 2-month rotation-blocks from January to June 2009) to in-house overnight schedules of 24, 16 or 12 hours. The primary patient outcome was adverse events. The primary resident outcome was sleepiness, measured by the 7-point Stanford Sleepiness Scale. Secondary outcomes were patient deaths, preventable adverse events, and residents’ physical symptoms and burnout. Continuity of care and perceptions of ICU staff were also assessed.

Results:

We evaluated 47 (96%) of 49 residents, all 971 admissions, 5894 patient-days and 452 staff surveys. We found no effect of schedule (24-, 16- or 12-h shifts) on adverse events (81.3, 76.3 and 78.2 events per 1000 patient-days, respectively; p = 0.7) or on residents’ sleepiness in the daytime (mean rating 2.33, 2.61 and 2.30, respectively; p = 0.3) or at night (mean rating 3.06, 2.73 and 2.42, respectively; p = 0.2). Seven of 8 preventable adverse events occurred with the 12-hour schedule (p = 0.1). Mortality rates were similar for the 3 schedules. Residents’ somatic symptoms were more severe and more frequent with the 24-hour schedule (p = 0.04); however, burnout was similar across the groups. ICU staff rated residents’ knowledge and decision-making worst with the 16-hour schedule.
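
The adverse-event rates above are simple exposure-adjusted counts; the sketch below shows the arithmetic for "events per 1000 patient-days" using hypothetical counts rather than the trial's raw data.

```python
# Exposure-adjusted event rate: events per 1000 patient-days (illustrative counts).
def events_per_1000_patient_days(n_events: int, patient_days: int) -> float:
    return 1000 * n_events / patient_days

print(events_per_1000_patient_days(160, 1968))  # about 81.3 events per 1000 patient-days
```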

Interpretation:

Our findings do not support the purported advantages of shorter duty schedules. They also highlight the trade-offs between residents’ symptoms and multiple secondary measures of patient safety. Further delineation of this emerging signal is required before widespread system change. Trial registration: ClinicalTrials.gov, no. NCT00679809.

Physician fatigue is common, is associated with worse physician well-being and more medical errors than are seen with well-rested clinicians, and may compromise patient safety.13 Shorter duty hours are purported to address these concerns,4,5 but they necessitate more care transitions, which increases the risk of information loss.6,7 The net effect on patient safety therefore depends on the relative balance between fatigue and continuity.813 Currently, high-quality data to guide scheduling decisions are limited.

The complexity, acuity and therapeutic intensity of patients’ conditions and their care make the intensive care unit (ICU) an ideal environment to evaluate the trade-offs between physician fatigue and continuity. Prior randomized studies have evaluated data for interns14 or intensivists,15 rather than the residents who provide most in-house overnight care in Canadian ICUs.16 We evaluated the impact of 3 commonly used schedules5 on patient safety and resident well-being.

20.

Background

There is controversy about which children with minor head injury need to undergo computed tomography (CT). We aimed to develop a highly sensitive clinical decision rule for the use of CT in children with minor head injury.

Methods

For this multicentre cohort study, we enrolled consecutive children with blunt head trauma presenting with a score of 13–15 on the Glasgow Coma Scale and loss of consciousness, amnesia, disorientation, persistent vomiting or irritability. For each child, staff in the emergency department completed a standardized assessment form before any CT. The main outcomes were need for neurologic intervention and presence of brain injury as determined by CT. We developed a decision rule by using recursive partitioning to combine variables that were both reliable and strongly associated with the outcome measures and thus to find the best combinations of predictor variables that were highly sensitive for detecting the outcome measures with maximal specificity.
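
Recursive partitioning belongs to the same family of methods as CART decision trees; as a rough, hedged illustration, the sketch below fits scikit-learn's DecisionTreeClassifier to simulated binary predictors. It is not the authors' derivation procedure, and the predictor names, outcome model and data are invented.

```python
# Stand-in for recursive partitioning: a shallow CART tree on simulated predictors.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
n = 500
X = rng.integers(0, 2, size=(n, 3))                  # three hypothetical binary findings
# Simulated outcome loosely driven by the first predictor (illustrative only).
y = ((X[:, 0] == 1) & (rng.random(n) < 0.3)).astype(int)

tree = DecisionTreeClassifier(max_depth=2, class_weight="balanced", random_state=0)
tree.fit(X, y)
print(export_text(tree, feature_names=["gcs_below_15_at_2h",
                                       "worsening_headache",
                                       "persistent_vomiting"]))
```

In the actual derivation, candidate splits would be chosen to keep sensitivity for the outcome as close to 100% as possible while maximizing specificity, rather than by the default impurity criterion shown here.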

Results

Among the 3866 patients enrolled (mean age 9.2 years), 95 (2.5%) had a score of 13 on the Glasgow Coma Scale, 282 (7.3%) had a score of 14, and 3489 (90.2%) had a score of 15. CT revealed that 159 (4.1%) had a brain injury, and 24 (0.6%) underwent neurologic intervention. We derived a decision rule for CT of the head consisting of four high-risk factors (failure to reach a score of 15 on the Glasgow Coma Scale within two hours, suspicion of open skull fracture, worsening headache and irritability) and three additional medium-risk factors (large, boggy hematoma of the scalp; signs of basal skull fracture; dangerous mechanism of injury). The high-risk factors were 100.0% sensitive (95% CI 86.2%–100.0%) for predicting the need for neurologic intervention and would require that 30.2% of patients undergo CT. The medium-risk factors resulted in 98.1% sensitivity (95% CI 94.6%–99.4%) for the prediction of brain injury by CT and would require that 52.0% of patients undergo CT.
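
Sensitivity estimates and their confidence intervals are derived from counts of true positives and false negatives; the sketch below shows one common approach, the Wilson score interval. Applied to the 24 neurologic interventions with no misses, it yields roughly 86.2%–100.0%, consistent with the figure reported above, although the paper's exact CI method is not stated here.

```python
# Sensitivity with a Wilson score 95% CI (one common choice; counts illustrative).
import math

def sensitivity_with_ci(true_pos: int, false_neg: int, z: float = 1.96):
    n = true_pos + false_neg
    p = true_pos / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return p, (centre - half, centre + half)

sens, ci = sensitivity_with_ci(true_pos=24, false_neg=0)
print(f"sensitivity {sens:.3f}, 95% CI {ci[0]:.3f}-{min(ci[1], 1.0):.3f}")
```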

Interpretation

The decision rule developed in this study identifies children at two levels of risk. Once the decision rule has been prospectively validated, it has the potential to standardize and improve the use of CT for children with minor head injury.

Each year more than 650 000 children are seen in hospital emergency departments in North America with “minor head injury,” i.e., a history of loss of consciousness, amnesia or disorientation in a patient who is conscious and responsive in the emergency department (Glasgow Coma Scale score1 13–15). Although most patients with minor head injury can be discharged after a period of observation, a small proportion experience deterioration of their condition and need to undergo neurosurgical intervention for intracranial hematoma.24 The use of computed tomography (CT) in the emergency department is important in the early diagnosis of these intracranial hematomas.

Over the past decade the use of CT for minor head injury has become increasingly common, while its diagnostic yield has remained low. In Canadian pediatric emergency departments the use of CT for minor head injury increased from 15% in 1995 to 53% in 2005.5,6 Despite this increase, a small but important number of pediatric intracranial hematomas are missed in Canadian emergency departments at the first visit.3 Few children with minor head injury have a visible brain injury on CT (4%–7%), and only 0.5% have an intracranial lesion requiring urgent neurosurgical intervention.5,7 The increased use of CT adds substantially to health care costs and exposes a large number of children each year to the potentially harmful effects of ionizing radiation.8,9 Currently, there are no widely accepted, evidence-based guidelines on the use of CT for children with minor head injury.

A clinical decision rule incorporates three or more variables from the history, physical examination or simple tests10,11 into a tool that helps clinicians to make diagnostic or therapeutic decisions at the bedside. Members of our group have developed decision rules to allow physicians to be more selective in the use of radiography for children with injuries of the ankle12 and knee,13 as well as for adults with injuries of the ankle,1417 knee,1820 head21,22 and cervical spine.23,24 The aim of this study was to prospectively derive an accurate and reliable clinical decision rule for the use of CT for children with minor head injury.
