Similar articles
20 similar articles found.
1.

Background:

Low socioeconomic status is associated with poor cardiovascular health. We evaluated the association between socioeconomic status and the incidence of sudden cardiac arrest, a condition that accounts for a substantial proportion of cardiovascular-related deaths, in seven large North American urban populations.

Methods:

Using a population-based registry, we collected data on out-of-hospital sudden cardiac arrests occurring at home or at a residential institution from Apr. 1, 2006, to Mar. 31, 2007. We limited the analysis to cardiac arrests in seven metropolitan areas in the United States (Dallas, Texas; Pittsburgh, Pennsylvania; Portland, Oregon; and Seattle–King County, Washington) and Canada (Ottawa and Toronto, Ontario; and Vancouver, British Columbia). Each incident was linked to a census tract; tracts were classified into quartiles of median household income.

Results:

A total of 9235 sudden cardiac arrests were included in the analysis. For all sites combined, the incidence of sudden cardiac arrest in the lowest socioeconomic quartile was nearly double that in the highest quartile (incidence rate ratio [IRR] 1.9, 95% confidence interval [CI] 1.8–2.0). This disparity was greater among people less than 65 years old (IRR 2.7, 95% CI 2.5–3.0) than among those 65 or older (IRR 1.3, 95% CI 1.2–1.4). After adjustment for study site and for population age structure of each census tract, the disparity across socioeconomic quartiles for all ages combined was greater in the United States (IRR 2.0, 95% CI 1.9–2.2) than in Canada (IRR 1.8, 95% CI 1.6–2.0) (p < 0.001 for interaction).

Interpretation:

The incidence of sudden cardiac arrest at home or at a residential institution was higher in poorer neighbourhoods of the US and Canadian sites studied, although the association was attenuated in Canada. The disparity across socioeconomic quartiles was greatest among people younger than 65. The association between socioeconomic status and incidence of sudden cardiac arrest merits consideration in the development of strategies to improve survival from sudden cardiac arrest, and possibly to identify opportunities for prevention.

An estimated 250 000–300 000 sudden cardiac arrests occur each year in the United States,1 accounting for up to 63% of cardiac-related deaths annually.2 Despite advances in resuscitation, more than 95% of people who experience sudden cardiac arrest die,3 and up to 50% of sudden cardiac arrests occur in people who do not have a history of coronary artery disease.4

Socioeconomic status has been shown to predict many health outcomes, including all-cause mortality,5 prevalence of risk factors for cardiovascular disease6 and incidence of cardiovascular disease.7–9 Despite this substantial literature, we found only three studies that examined the potential association between socioeconomic status and sudden cardiac arrest. Although the studies were small and conducted in single communities, each showed that the incidence of sudden cardiac arrest was significantly higher in lower socioeconomic areas.10–12 The Oregon Sudden Unexplained Death Study (Ore-SUDS) reported a 30%–80% higher incidence of sudden cardiac arrest in poorer neighbourhoods. A stronger association was observed among people less than 65 years old, a group for whom basic health care funding is not guaranteed in the United States.11

Low socioeconomic status may be linked to an increased risk of sudden cardiac arrest by a variety of mechanisms related to individual risk factors, health-promoting behaviours or neighbourhood characteristics. Individuals of lower socioeconomic status have been found to have a greater burden of risk factors for cardiovascular disease,13 poorer control of established cardiovascular risk factors14 and longer delays in seeking hospital care for acute myocardial infarction.15 Numerous studies have also shown that disparities in health outcomes are apparent across the spectrum of socioeconomic status.16

A better understanding of community-level patterns in the distribution of sudden cardiac arrest may identify opportunities for improving survival, such as effective targeting of community training for cardiopulmonary resuscitation and placement of automated external defibrillators in lower-income communities. We tested the hypothesis that disparities in the incidence of sudden cardiac arrest by level of socioeconomic status would be evident in a variety of urban communities in the United States and Canada, and that this association would be most prominent among people less than 65 years old residing in US communities.
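The rate comparison above reduces to a standard calculation: an incidence rate ratio from event counts and person-time, with a Wald confidence interval on the log scale. A minimal sketch with made-up counts (the registry's actual denominators are not given here):

```python
import math

def irr_ci(events_a, py_a, events_b, py_b, z=1.96):
    """Incidence rate ratio (A vs. B) with a Wald 95% CI on the log scale."""
    irr = (events_a / py_a) / (events_b / py_b)
    se = math.sqrt(1 / events_a + 1 / events_b)  # SE of log(IRR)
    return irr, math.exp(math.log(irr) - z * se), math.exp(math.log(irr) + z * se)

# Hypothetical: 2000 arrests over 1.0M person-years in the lowest income
# quartile vs. 1000 arrests over 0.95M person-years in the highest.
irr, lo, hi = irr_ci(2000, 1_000_000, 1000, 950_000)
print(f"IRR {irr:.1f} (95% CI {lo:.1f}-{hi:.1f})")  # ~1.9, as in the abstract
```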

2.

Background:

The increasing number of people living in high-rise buildings presents unique challenges to care and may cause delays for 911-initiated first responders (including paramedics and fire department personnel) responding to calls for out-of-hospital cardiac arrest. We examined the relation between floor of patient contact and survival after cardiac arrest in residential buildings.

Methods:

We conducted a retrospective observational study using data from the Toronto Regional RescuNet Epistry database for the period January 2007 to December 2012. We included all adult patients (≥ 18 yr) with out-of-hospital cardiac arrest of no obvious cause who were treated in private residences. We excluded cardiac arrests witnessed by 911-initiated first responders and those with an obvious cause. We used multivariable logistic regression to determine the effect on survival of the floor of patient contact, with adjustment for standard Utstein variables.

Results:

During the study period, 7842 cases of out-of-hospital cardiac arrest met the inclusion criteria, of which 5998 (76.5%) occurred below the third floor and 1844 (23.5%) occurred on the third floor or higher. Survival was greater on the lower floors (4.2% v. 2.6%, p = 0.002). Lower adjusted survival to hospital discharge was independently associated with higher floor of patient contact, older age, male sex and longer 911 response time. In an analysis by floor, survival was 0.9% above floor 16 (i.e., below the 1% threshold for futility), and there were no survivors above the 25th floor.

Interpretation:

In high-rise buildings, the survival rate after out-of-hospital cardiac arrest was lower for patients residing on higher floors. Interventions aimed at shortening response times to treatment of cardiac arrest in high-rise buildings may increase survival.

More than 400 000 out-of-hospital cardiac arrests occur annually in North America.1,2 Despite considerable effort to improve resuscitation care, survival to hospital discharge in most communities remains below 10%.2 Rapid defibrillation and high-quality cardiopulmonary resuscitation (CPR) are essential for survival, with an absolute decrease in survival of 7% to 10% for each 1-minute delay to defibrillation.3–5

Recently, there has been a dramatic increase in the number of people living in high-rise buildings (e.g., a 13% relative increase in Toronto from 2006 to 2011).6,7 As more high-rise buildings are constructed in urban centres across Canada, the number of 911 calls for emergency medical services in high-rise buildings will also continue to increase. Furthermore, over 40% of homeowners over the age of 65 years reside in high-rise buildings.8 These older residents have higher risks for a number of serious medical conditions, including cardiac arrest. Cardiac arrests that occur in high-rise buildings pose unique challenges for 911-initiated first responders. Building access issues, elevator delays and extended distance from the location of the responding vehicle on scene to the patient can all contribute to longer times to patient contact and, ultimately, longer times to initiation of resuscitation. Previous research has shown that longer 911 response times result in decreased patient survival after cardiac arrest,9,10 but response times are traditionally measured from the time a call is received by the 911 dispatch centre to when the response vehicle arrives on scene. This measure fails to take into account the time required for 911-initiated first responders to make patient contact once they arrive on scene. This interval can contribute substantial delays to patient treatment, in some cases more than 4 minutes, and can account for up to 28% of the total time from the 911 call to arrival of the first responders at the patient’s side.11–14

There is a lack of literature describing the delay to patient contact during out-of-hospital cardiac arrests in high-rise buildings, where time-sensitive, life-saving interventions matter most. Furthermore, the effect on survival of vertical delay to patient contact is unknown. As the number of high-rise buildings continues to increase and as population density rises in major urban centres, it is important to determine the effect of delays to patient care in high-rise buildings on survival after cardiac arrest and to examine potential barriers to patient care in this setting.

The primary objective of this study was to compare the rate of survival to hospital discharge after out-of-hospital cardiac arrest at different vertical heights in residential buildings, specifically higher floors (≥ 3 floors) relative to lower floors (< 3 floors), with adjustment for standard Utstein variables.15 The secondary objectives were to determine the delay to patient contact by 911-initiated first responders for cardiac arrests occurring on higher floors and to examine the use of automated external defibrillators by bystanders in private residences.
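The adjustment described in the Methods is an ordinary multivariable logistic regression of survival on floor of contact plus Utstein-style covariates. A sketch on simulated data (the variable names and effect sizes are invented, not the Epistry fields):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "age": rng.normal(68, 15, n),
    "male": rng.integers(0, 2, n),
    "response_min": rng.gamma(3, 2, n),   # 911 response time, minutes
    "high_floor": rng.integers(0, 2, n),  # patient contact on floor >= 3
})
# Simulate survival declining with age, response time and higher floor.
logit = -1.5 - 0.02 * (df.age - 68) - 0.1 * df.response_min - 0.5 * df.high_floor
df["survived"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

model = smf.logit("survived ~ age + male + response_min + high_floor", df).fit()
print(np.exp(model.params))      # adjusted odds ratios
print(np.exp(model.conf_int()))  # 95% CIs on the OR scale
```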

3.

Background

We developed and tested a new method, called the Evidence-based Practice for Improving Quality method, for continuous quality improvement.

Methods

We used cluster randomization to assign 6 neonatal intensive care units (ICUs) to reduce nosocomial infection (infection group) and 6 ICUs to reduce bronchopulmonary dysplasia (pulmonary group). We included all infants born at 32 or fewer weeks’ gestation. We collected baseline data for 1 year. Practice change interventions were implemented using rapid-change cycles for 2 years.

Results

The difference in incidence trends (slopes of trend lines) between the ICUs in the infection and pulmonary groups was −0.0020 (95% confidence interval [CI] −0.0007 to 0.0004) for nosocomial infection and −0.0006 (95% CI −0.0011 to −0.0001) for bronchopulmonary dysplasia.

Interpretation

The results suggest that the Evidence-based Practice for Improving Quality method reduced bronchopulmonary dysplasia in the neonatal ICU and that it may reduce nosocomial infection.

Although methods for continuous quality improvement have been used to improve outcomes,1–3 some, such as the National Institutes of Child Health and Human Development Quality Collaborative,4 have reported little or no effect in neonatal intensive care units (ICUs). These methods have been criticized for being based on intuition and anecdotes rather than on evidence.5 To address these concerns, researchers have developed methods aimed at improving the use of evidence in quality improvement. Tarnow-Mordi and colleagues,6 Sankaran and colleagues7 and others8–10 have used benchmarking instruments6,8,11 to show risk-adjusted variations in outcomes in neonatal ICUs. Synnes and colleagues12 reported that variations in the rates of intraventricular hemorrhage could be attributed to practice differences. MacNab and colleagues13 showed how multilevel modelling methods can be used to identify practice differences associated with variations in outcomes for targeted interventions and to quantify their attributable risks.

Building on these results, we developed the Evidence-based Practice for Improving Quality method for continuous quality improvement. This method is based on 3 pillars: the use of evidence from published literature; the use of data from participating hospitals to identify hospital-specific practices for targeted intervention; and the use of a national network to share expertise. By selectively targeting hospital-specific practices for intervention, this method reduces the reliance on intuition and anecdotes that are associated with existing quality-improvement methods.

Our objective was to evaluate the efficacy of the Evidence-based Practice for Improving Quality method by conducting a prospective cluster randomized controlled trial to reduce nosocomial infection and bronchopulmonary dysplasia among infants born at 32 or fewer weeks’ gestation and admitted to 12 Canadian Neonatal Network hospitals14 over a 36-month period. We hypothesized that the incidence of nosocomial infection would be reduced among infants in ICUs randomized to reduce infection but not among those in ICUs randomized to reduce bronchopulmonary dysplasia. We also hypothesized that the incidence of bronchopulmonary dysplasia would be reduced among infants in the ICUs randomized to reduce this outcome but not among those in ICUs randomized to reduce infections.
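The primary contrast in the Results is a difference between fitted incidence-trend slopes in the two cluster groups. A toy illustration of the slope arithmetic on simulated monthly rates (not trial data; the real analysis would also account for clustering by hospital):

```python
import numpy as np

# Hypothetical monthly incidence proportions over 36 months for the two
# cluster groups (units targeting infection vs. units targeting BPD).
months = np.arange(36)
rng = np.random.default_rng(1)
infection_grp = 0.25 - 0.0020 * months + rng.normal(0, 0.01, 36)
pulmonary_grp = 0.25 - 0.0000 * months + rng.normal(0, 0.01, 36)

slope_a = np.polyfit(months, infection_grp, 1)[0]  # trend in targeted units
slope_b = np.polyfit(months, pulmonary_grp, 1)[0]  # trend in control units
print(f"difference in slopes: {slope_a - slope_b:+.4f} per month")
```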

4.

Background

Survivors of out-of-hospital cardiac arrest are at high risk of recurrent arrests, many of which could be prevented with implantable cardioverter defibrillators (ICDs). We sought to determine the ICD insertion rate among survivors of out-of-hospital cardiac arrest and to determine factors associated with ICD implantation.

Methods

The Ontario Prehospital Advanced Life Support (OPALS) study is a prospective, multiphase, before–after study assessing the effectiveness of prehospital interventions for people experiencing cardiac arrest, trauma or respiratory arrest in 19 Ontario communities. We linked OPALS data describing survivors of cardiac arrest with data from all defibrillator implantation centres in Ontario.

Results

From January 1997 to April 2002, 454 patients in the OPALS study survived to hospital discharge after experiencing an out-of-hospital cardiac arrest. The mean age was 65 (standard deviation 14) years, 122 (26.9%) were women, 398 (87.7%) had a witnessed arrest, 372 (81.9%) had an initial rhythm of ventricular tachycardia or ventricular fibrillation (VT/VF), and 76 (16.7%) had asystole or another arrhythmia. The median cerebral performance category at discharge (range 1–5, 1 = normal) was 1. Only 58 (12.8%) of the 454 patients received an ICD. Patients with an initial rhythm of VT/VF were more likely than those with an initial rhythm of asystole or another rhythm to undergo device insertion (adjusted odds ratio [OR] 9.63, 95% confidence interval [CI] 1.31–71.50). Similarly, patients with a normal cerebral performance score were more likely than those with abnormal scores to undergo ICD insertion (adjusted OR 12.52, 95% CI 1.74–92.12).

Interpretation

A minority of patients who survived cardiac arrest underwent ICD insertion. It is unclear whether this low usage rate reflects referral bias, selection bias by electrophysiologists, supply constraint or patient preference.

People who survive out-of-hospital cardiac arrest have an increased risk of recurrent arrest of 18%–20% in the first year.1,2 Three large randomized studies evaluated the use of implantable cardioverter defibrillators (ICDs) versus antiarrhythmic drugs in survivors of out-of-hospital cardiac arrest.3–5 The largest of the 3 studies involved 1016 patients and found a 39% relative risk reduction in mortality in the ICD group.3 The 2 smaller studies both reported nonsignificant reductions in mortality in the ICD group.4,5 Two recent meta-analyses showed that the use of ICDs was associated with significant and important increases in survival among cardiac arrest survivors: all-cause mortality was reduced by 23%–28% with their use for secondary prevention, and the rate of sudden cardiac death was reduced by 50% in both meta-analyses.6,7

Guidelines from several national and international societies recommend insertion of ICDs in all survivors of cardiac arrest without a reversible cause.8,9 Despite advances in ICD insertion and technology, studies to date suggest that the utilization rate is low, at least in some settings.10,11 Several factors, including patient preference, physician referral, availability and cost, may contribute to the underutilization of ICDs.

The Ontario Prehospital Advanced Life Support Study (OPALS)12,13 is a multiphase before–after study designed to systematically evaluate the effectiveness of various prehospital interventions for people experiencing cardiac arrest, trauma or respiratory arrest. As an extension of the OPALS study, we sought to determine the rate of ICD insertion among survivors of cardiac arrest, as well as the factors associated with ICD implantation.
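For a single binary predictor, the unadjusted counterpart of the odds ratios above comes straight from a 2×2 table with a Woolf (log-scale) confidence interval; the paper's adjusted ORs came from a multivariable model. A sketch with hypothetical, unadjusted counts:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR from a 2x2 table with a Woolf 95% CI.
    a, b = ICD / no ICD among VT/VF; c, d = ICD / no ICD among other rhythms."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    return or_, math.exp(math.log(or_) - z * se), math.exp(math.log(or_) + z * se)

# Hypothetical counts consistent with the abstract's direction of effect:
# 56 of 372 VT/VF patients vs. 2 of 76 patients with other rhythms got an ICD.
print(odds_ratio_ci(56, 372 - 56, 2, 76 - 2))
```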

5.

Background:

Persistent postoperative pain continues to be an underrecognized complication. We examined the prevalence of and risk factors for this type of pain after cardiac surgery.

Methods:

We enrolled patients scheduled for coronary artery bypass grafting or valve replacement, or both, from Feb. 8, 2005, to Sept. 1, 2009. Validated measures were used to assess (a) preoperative anxiety and depression, tendency to catastrophize in the face of pain, health-related quality of life and presence of persistent pain; (b) pain intensity and interference in the first postoperative week; and (c) presence and intensity of persistent postoperative pain at 3, 6, 12 and 24 months after surgery. The primary outcome was the presence of persistent postoperative pain during 24 months of follow-up.

Results:

A total of 1247 patients completed the preoperative assessment. Follow-up retention rates at 3 and 24 months were 84% and 78%, respectively. The prevalence of persistent postoperative pain decreased significantly over time, from 40.1% at 3 months to 22.1% at 6 months, 16.5% at 12 months and 9.5% at 24 months; the pain was rated as moderate to severe in 3.6% at 24 months. Acute postoperative pain predicted both the presence and severity of persistent postoperative pain. The more intense the pain during the first week after surgery and the more it interfered with functioning, the more likely the patients were to report persistent postoperative pain. Pre-existing persistent pain and increased preoperative anxiety also predicted the presence of persistent postoperative pain.

Interpretation:

Persistent postoperative pain of nonanginal origin after cardiac surgery affected a substantial proportion of the study population. Future research is needed to determine whether interventions to modify certain risk factors, such as preoperative anxiety and the severity of pain before and immediately after surgery, may help to minimize or prevent persistent postoperative pain.

Postoperative pain that persists beyond the normal time for tissue healing (> 3 mo) is increasingly recognized as an important complication after various types of surgery and can have serious consequences on patients’ daily living.1–3 Cardiac surgeries, such as coronary artery bypass grafting (CABG) and valve replacement, rank among the most frequently performed interventions worldwide.4 They aim to improve survival and quality of life by reducing symptoms, including anginal pain. However, persistent postoperative pain of nonanginal origin has been reported in 7% to 60% of patients following these surgeries.5–23 Such variability is common in other types of major surgery and is due mainly to differences in the definition of persistent postoperative pain, study design, data collection methods and duration of follow-up.1–3,24

Few prospective cohort studies have examined the exact time course of persistent postoperative pain after cardiac surgery, and follow-up has always been limited to a year or less.9,14,25 Factors that put patients at risk of this type of problem are poorly understood.26 Studies have reported inconsistent results regarding the contribution of age, sex, body mass index, preoperative angina, surgical technique, grafting site, postoperative complications or level of opioid consumption after surgery.5–7,9,13,14,16–19,21–23,25,27 Only 1 study investigated the role of chronic nonanginal pain before surgery as a contributing factor;21 5 others prospectively assessed the association between persistent postoperative pain and acute pain intensity in the first postoperative week but reported conflicting results.13,14,21,22,25 All of the above studies were carried out in a single hospital and included relatively small samples. None of the studies examined the contribution of psychological factors such as levels of anxiety and depression before cardiac surgery, although these factors have been shown to influence acute or persistent postoperative pain in other types of surgery.1,24,28,29

We conducted a prospective multicentre cohort study (the CARD-PAIN study) to determine the prevalence of persistent postoperative pain of nonanginal origin up to 24 months after cardiac surgery and to identify risk factors for the presence and severity of the condition.

6.

Background:

Polymyalgia rheumatica is one of the most common inflammatory rheumatologic conditions in older adults. Other inflammatory rheumatologic disorders are associated with an excess risk of vascular disease. We investigated whether polymyalgia rheumatica is associated with an increased risk of vascular events.

Methods:

We used the General Practice Research Database to identify patients with a diagnosis of incident polymyalgia rheumatica between Jan. 1, 1987, and Dec. 31, 1999. Patients were matched by age, sex and practice with up to 5 patients without polymyalgia rheumatica. Patients were followed until their first vascular event (cardiovascular, cerebrovascular, peripheral vascular) or the end of available records (May 2011). All participants were free of vascular disease before the diagnosis of polymyalgia rheumatica (or matched date). We used Cox regression models to compare time to first vascular event in patients with and without polymyalgia rheumatica.

Results:

A total of 3249 patients with polymyalgia rheumatica and 12 735 patients without were included in the final sample. Over a median follow-up period of 7.8 (interquartile range 3.3–12.4) years, the rate of vascular events was higher among patients with polymyalgia rheumatica than among those without (36.1 v. 12.2 per 1000 person-years; adjusted hazard ratio 2.6, 95% confidence interval 2.4–2.9). The increased risk of a vascular event was similar for each vascular disease end point. The magnitude of risk was higher in early disease and in patients younger than 60 years at diagnosis.

Interpretation:

Patients with polymyalgia rheumatica have an increased risk of vascular events. This risk is greatest in the youngest age groups. As with other forms of inflammatory arthritis, patients with polymyalgia rheumatica should have their vascular risk factors identified and actively managed to reduce this excess risk.

Inflammatory rheumatologic disorders such as rheumatoid arthritis,1,2 systemic lupus erythematosus,2,3 gout,4 psoriatic arthritis2,5 and ankylosing spondylitis2,6 are associated with an increased risk of vascular disease, especially cardiovascular disease, leading to substantial morbidity and premature death.2–6 Recognition of this excess vascular risk has led to management guidelines advocating screening for and management of vascular risk factors.7–9

Polymyalgia rheumatica is one of the most common inflammatory rheumatologic conditions in older adults,10 with a lifetime risk of 2.4% for women and 1.7% for men.11 To date, evidence regarding the risk of vascular disease in patients with polymyalgia rheumatica is unclear. There are a number of biologically plausible mechanisms linking polymyalgia rheumatica and vascular disease. These include the inflammatory burden of the disease,12,13 the association of the disease with giant cell arteritis (causing an inflammatory vasculopathy, which may lead to subclinical arteritis, stenosis or aneurysms),14 and the adverse effects of long-term corticosteroid treatment (e.g., diabetes, hypertension and dyslipidemia).15,16 Paradoxically, however, use of corticosteroids in patients with polymyalgia rheumatica may actually decrease vascular risk by controlling inflammation.17 A recent systematic review concluded that although some evidence exists to support an association between vascular disease and polymyalgia rheumatica,18 the existing literature presents conflicting results, with some studies reporting an excess risk of vascular disease19,20 and vascular death,21,22 and others reporting no association.23–26 Most current studies are limited by poor methodologic quality and small samples, and are based on secondary care cohorts, who may have more severe disease, yet most patients with polymyalgia rheumatica receive treatment exclusively in primary care.27

The General Practice Research Database (GPRD), based in the United Kingdom, is a large electronic system for primary care records. It has been used as a data source for previous studies,28 including studies on the association of inflammatory conditions with vascular disease29 and on the epidemiology of polymyalgia rheumatica in the UK.30 The aim of the current study was to examine the association between polymyalgia rheumatica and vascular disease in a primary care population.
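The time-to-first-event comparison described in the Methods is a Cox proportional hazards model. A sketch using the lifelines library on simulated data (column names and effect sizes are invented; the GPRD analysis also matched patients by age, sex and practice):

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(2)
n = 4000
df = pd.DataFrame({
    "pmr": rng.integers(0, 2, n),   # polymyalgia rheumatica diagnosis
    "age": rng.normal(72, 8, n),
    "male": rng.integers(0, 2, n),
})
# Simulated time to first vascular event: higher hazard with PMR and age.
baseline = rng.exponential(15, n)
df["time"] = baseline / np.exp(0.9 * df.pmr + 0.03 * (df.age - 72))
df["event"] = (df["time"] < 12).astype(int)  # administrative censoring
df["time"] = df["time"].clip(upper=12)

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
cph.print_summary()  # exp(coef) for 'pmr' approximates the hazard ratio
```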

7.

Background:

Acute kidney injury is a serious complication of elective major surgery. Acute dialysis is used to support life in the most severe cases. We examined whether rates and outcomes of acute dialysis after elective major surgery have changed over time.

Methods:

We used data from Ontario’s universal health care databases to study all consecutive patients who had elective major surgery at 118 hospitals between 1995 and 2009. Our primary outcomes were acute dialysis within 14 days of surgery, death within 90 days of surgery and chronic dialysis for patients who did not recover kidney function.

Results:

A total of 552 672 patients underwent elective major surgery during the study period, 2231 of whom received acute dialysis. The incidence of acute dialysis increased steadily from 0.2% in 1995 (95% confidence interval [CI] 0.15–0.2) to 0.6% in 2009 (95% CI 0.6–0.7). This increase was primarily in cardiac and vascular surgeries. Among patients who received acute dialysis, 937 died within 90 days of surgery (42.0%, 95% CI 40.0–44.1), with no change in 90-day survival over time. Among the 1294 patients who received acute dialysis and survived beyond 90 days, 352 required chronic dialysis (27.2%, 95% CI 24.8–29.7), with no change over time.

Interpretation:

The use of acute dialysis after cardiac and vascular surgery has increased substantially since 1995. Studies focusing on interventions to better prevent and treat perioperative acute kidney injury are needed.

More than 230 million elective major surgeries are done annually worldwide.1 Acute kidney injury is a serious complication of major surgery. It represents a sudden loss of kidney function that affects morbidity, mortality and health care costs.2 Dialysis is used for the most severe forms of acute kidney injury. In the nonsurgical setting, the incidence of acute dialysis has steadily increased over the last 15 years, and patients are now more likely to survive to discharge from hospital.3–5 Similarly, in the surgical setting, the incidence of acute dialysis appears to be increasing over time,6–10 with declining in-hospital mortality.8,10,11

Although previous studies have improved our understanding of the epidemiology of acute dialysis in the surgical setting, several questions remain. Many previous studies were conducted at a single centre, thereby limiting their generalizability.6,12–14 Most multicentre studies were conducted in the nonsurgical setting and used diagnostic codes for acute kidney injury not requiring dialysis; however, these codes can be inaccurate.15,16 In contrast, a procedure such as dialysis is easily determined. The incidence of acute dialysis after elective surgery is of particular interest given the need for surgical consent, the severe nature of the event and the potential for mitigation. The need for chronic dialysis among patients who do not recover renal function after surgery has been poorly studied, yet this condition has a major effect on patient survival and quality of life.17 For these reasons, we studied secular trends in acute dialysis after elective major surgery, focusing on incidence, 90-day mortality and need for chronic dialysis.
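Yearly incidence figures like the 0.2% and 0.6% above are proportions with binomial confidence intervals. A sketch of the interval arithmetic with hypothetical yearly counts (the per-year denominators are not given here):

```python
from statsmodels.stats.proportion import proportion_confint

# Hypothetical yearly counts: acute-dialysis cases / elective surgeries.
for year, (cases, surgeries) in {
    1995: (60, 33_000),   # ~0.2%
    2009: (260, 41_000),  # ~0.6%
}.items():
    lo, hi = proportion_confint(cases, surgeries, alpha=0.05, method="wilson")
    print(f"{year}: {cases / surgeries:.2%} (95% CI {lo:.2%}-{hi:.2%})")
```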

8.

Background

Daily evaluation of multiple organ dysfunction syndrome has been performed in critically ill adults. We evaluated the clinical course of multiple organ dysfunction over time in critically ill children using the Pediatric Logistic Organ Dysfunction (PELOD) score and determined the optimal days for measuring scores.

Methods

We prospectively measured daily PELOD scores and calculated the change in scores over time for 1806 consecutive patients admitted to seven pediatric intensive care units (PICUs) between September 1998 and February 2000. To study the relationship between daily scores and mortality in the PICU, we evaluated changes in daily scores during the first four days; the mean rate of change in scores during the entire PICU stay between survivors and nonsurvivors; and Cox survival analyses using a change in PELOD score as a time-dependent covariate to determine the optimal days for measuring daily scores.

Results

The overall mortality among the 1806 patients was 6.4%. A high PELOD score (≥ 20 points) on day 1 was associated with an odds ratio (OR) for death of 40.7 (95% confidence interval [CI] 20.3–81.4); a medium score (10–19 points) on day 1 was associated with an OR for death of 4.2 (95% CI 2.0–8.7). Mortality was 50% when a high score on day 1 increased on day 2. The course of daily PELOD scores differed between survivors and nonsurvivors. A set of seven days (days 1, 2, 5, 8, 12, 16 and 18) was identified as the optimal period for measurement of daily PELOD scores.

Interpretation

PELOD scores indicating a worsening condition or no improvement over time were indicators of a poor prognosis in the PICU. A set of seven days for measurement of the PELOD score during the PICU stay provided optimal information on the progression of multiple-organ dysfunction syndrome in critically ill children.

Almost all patients in intensive care units (ICUs) have some organ dysfunction.1–4 Adult and pediatric studies have shown that mortality increases with the number of organs involved.2,4,5 Thus, multiple-organ dysfunction syndrome (dysfunction involving two or more organs) has been viewed as the inexorable pathway to death.6 Primary multiple-organ dysfunction syndrome (present at admission or occurring within the first week after admission to the ICU) accounts for 88% of children with the syndrome; secondary multiple-organ dysfunction syndrome is less common (12%) but is associated with higher morbidity and mortality.7

Organ dysfunction scores were first developed for use in critically ill adults to describe and quantify the severity of organ dysfunction, not to predict mortality. Two scores have been proposed for critically ill children: the Pediatric Logistic Organ Dysfunction (PELOD) score and the Pediatric Multiple Organ Dysfunction Score (P-MODS).8–10 These scores quantify organ dysfunction precisely and can be used as indicators of the severity of illness throughout the clinical course. They can also be used as baseline and outcome measures in clinical studies conducted in ICUs11,12 and pediatric ICUs (PICUs).13

The PELOD score calculated with data collected over the entire PICU stay has been validated (using the most abnormal value of each variable during the entire PICU stay).10 However, the PELOD score over the entire PICU stay cannot be calculated before discharge from the unit; therefore, it cannot be used to characterize and follow the severity of organ dysfunction on a daily basis. Measurements repeated daily may provide more useful information.14 The optimal period for measuring daily scores for multiple organ dysfunction in adults has been studied.15–17 Indeed, trends in the Sequential Organ Failure Assessment score over the first 48 hours in the ICU were found to be a sensitive indicator of outcome, with decreasing scores associated with a decrease in mortality from 50% to 27%.17 Similar data for critically ill children are lacking.

We conducted this study to describe the clinical course of multiple organ dysfunction over time as measured by the daily PELOD score. Because the time and effort necessary to ensure accurate daily assessments and data entry can be substantial,18 we also aimed to determine the optimal days for measuring daily scores during the PICU stay.
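Relating a daily score to mortality, as in the Cox analyses described above, requires a time-dependent covariate, i.e., long-format data with one row per patient-day. A sketch with simulated data using lifelines' CoxTimeVaryingFitter (the layout and effect size are invented, not the study's data):

```python
import numpy as np
import pandas as pd
from lifelines import CoxTimeVaryingFitter

rng = np.random.default_rng(3)
rows = []
for pid in range(300):
    pelod = max(0.0, rng.normal(10, 6))
    for day in range(10):
        # Daily score drifts; higher scores raise the daily death hazard.
        pelod = max(0.0, pelod + rng.normal(0, 2))
        died = rng.random() < 0.002 * pelod
        rows.append((pid, day, day + 1, pelod, int(died)))
        if died:
            break
df = pd.DataFrame(rows, columns=["id", "start", "stop", "pelod", "died"])

ctv = CoxTimeVaryingFitter()
ctv.fit(df, id_col="id", start_col="start", stop_col="stop", event_col="died")
ctv.print_summary()  # exp(coef) ~ hazard ratio per 1-point PELOD increase
```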

9.

Background

Patients exposed to low-dose ionizing radiation from cardiac imaging and therapeutic procedures after acute myocardial infarction may be at increased risk of cancer.

Methods

Using an administrative database, we selected a cohort of patients who had an acute myocardial infarction between April 1996 and March 2006 and no history of cancer. We documented all cardiac imaging and therapeutic procedures involving low-dose ionizing radiation. The primary outcome was risk of cancer. Statistical analyses were performed using a time-dependent Cox model adjusted for age, sex and exposure to low-dose ionizing radiation from noncardiac imaging to account for work-up of cancer.

Results

Of the 82 861 patients included in the cohort, 77% underwent at least one cardiac imaging or therapeutic procedure involving low-dose ionizing radiation in the first year after acute myocardial infarction. The cumulative exposure to radiation from cardiac procedures was 5.3 millisieverts (mSv) per patient-year, of which 84% occurred during the first year after acute myocardial infarction. A total of 12 020 incident cancers were diagnosed during the follow-up period. There was a dose-dependent relation between exposure to radiation from cardiac procedures and subsequent risk of cancer. For every 10 mSv of low-dose ionizing radiation, there was a 3% increase in the risk of age- and sex-adjusted cancer over a mean follow-up period of five years (hazard ratio 1.003 per millisievert, 95% confidence interval 1.002–1.004).

Interpretation

Exposure to low-dose ionizing radiation from cardiac imaging and therapeutic procedures after acute myocardial infarction is associated with an increased risk of cancer.

Studies involving atomic bomb survivors have documented an increased incidence of malignant neoplasm related to the radiation exposure.1–4 Survivors who were farther from the epicentre of the blast had a lower incidence of cancer, whereas those who were closer had a higher incidence.5 Similar risk estimates have been reported among workers in nuclear plants.6 However, little is known about the relation between exposure to low-dose ionizing radiation from medical procedures and the risk of cancer.

In the six decades since the atomic bomb explosions, most individuals worldwide have had minimal exposure to ionizing radiation. However, the recent increase in the use of medical imaging and therapeutic procedures involving low-dose ionizing radiation has led to a growing concern that individual patients may be at increased risk of cancer.7–12 Whereas strict regulatory control is placed on occupational exposure at work sites, no such control exists among patients who are exposed to such radiation.13–16

It is not only the frequency of these procedures that is increasing. Newer types of imaging procedures use higher doses of low-dose ionizing radiation than those used with more traditional procedures.8,11 Among patients being evaluated for coronary artery disease, for example, coronary computed tomography is increasingly being used. This test may be used in addition to other tests such as nuclear scans, coronary angiography and percutaneous coronary intervention, each of which exposes the patient to low-dose ionizing radiation.12,17–21 Imaging procedures provide information that can be used to predict the prognosis of patients with coronary artery disease. Since such predictions do not necessarily translate into better clinical outcomes,8,12 the prognostic value obtained from imaging procedures using low-dose ionizing radiation needs to be balanced against the potential for risk.

Authors of several studies have estimated that the risk of cancer is not negligible among patients exposed to low-dose ionizing radiation.22–27 To our knowledge, none of these studies directly linked cumulative exposure and cancer risk. We examined a cohort of patients who had acute myocardial infarction and measured the association between low-dose ionizing radiation from cardiac imaging and therapeutic procedures and the risk of cancer.
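The reported dose–response scales multiplicatively, which is how the per-millisievert hazard ratio yields the quoted 3% figure:

$$\mathrm{HR}_{10\,\mathrm{mSv}} = \left(\mathrm{HR}_{1\,\mathrm{mSv}}\right)^{10} = 1.003^{10} \approx 1.030,$$

that is, roughly a 3% relative increase in the adjusted hazard of cancer per 10 mSv of cumulative exposure, matching the abstract.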

10.

Background:

The gut microbiota is essential to human health throughout life, yet the acquisition and development of this microbial community during infancy remains poorly understood. Meanwhile, there is increasing concern over rising rates of cesarean delivery and insufficient exclusive breastfeeding of infants in developed countries. In this article, we characterize the gut microbiota of healthy Canadian infants and describe the influence of cesarean delivery and formula feeding.

Methods:

We included a subset of 24 term infants from the Canadian Healthy Infant Longitudinal Development (CHILD) birth cohort. Mode of delivery was obtained from medical records, and mothers were asked to report on infant diet and medication use. Fecal samples were collected at 4 months of age, and we characterized the microbiota composition using high-throughput DNA sequencing.

Results:

We observed high variability in the profiles of fecal microbiota among the infants. The profiles were generally dominated by Actinobacteria (mainly the genus Bifidobacterium) and Firmicutes (with diverse representation from numerous genera). Compared with breastfed infants, formula-fed infants had increased richness of species, with overrepresentation of Clostridium difficile. Escherichia–Shigella and Bacteroides species were underrepresented in infants born by cesarean delivery. Infants born by elective cesarean delivery had particularly low bacterial richness and diversity.

Interpretation:

These findings advance our understanding of the gut microbiota in healthy infants. They also provide new evidence for the effects of delivery mode and infant diet as determinants of this essential microbial community in early life.

The human body harbours trillions of microbes, known collectively as the “human microbiome.” By far the highest density of commensal bacteria is found in the digestive tract, where resident microbes outnumber host cells by at least 10 to 1. Gut bacteria play a fundamental role in human health by promoting intestinal homeostasis, stimulating development of the immune system, providing protection against pathogens, and contributing to the processing of nutrients and harvesting of energy.1,2 The disruption of the gut microbiota has been linked to an increasing number of diseases, including inflammatory bowel disease, necrotizing enterocolitis, diabetes, obesity, cancer, allergies and asthma.1 Despite this evidence and a growing appreciation for the integral role of the gut microbiota in lifelong health, relatively little is known about the acquisition and development of this complex microbial community during infancy.3

Two of the best-studied determinants of the gut microbiota during infancy are mode of delivery and exposure to breast milk.4,5 Cesarean delivery perturbs normal colonization of the infant gut by preventing exposure to maternal microbes, whereas breastfeeding promotes a “healthy” gut microbiota by providing selective metabolic substrates for beneficial bacteria.3,5 Despite recommendations from the World Health Organization,6 the rate of cesarean delivery has continued to rise in developed countries and rates of breastfeeding decrease substantially within the first few months of life.7,8 In Canada, more than 1 in 4 newborns are born by cesarean delivery, and less than 15% of infants are exclusively breastfed for the recommended duration of 6 months.9,10 In some parts of the world, elective cesarean deliveries are performed by maternal request, often because of apprehension about pain during childbirth, and sometimes for patient–physician convenience.11

The potential long-term consequences of decisions regarding mode of delivery and infant diet are not to be underestimated. Infants born by cesarean delivery are at increased risk of asthma, obesity and type 1 diabetes,12 whereas breastfeeding is variably protective against these and other disorders.13 These long-term health consequences may be partially attributable to disruption of the gut microbiota.12,14

Historically, the gut microbiota has been studied with the use of culture-based methodologies to examine individual organisms. However, up to 80% of intestinal microbes cannot be grown in culture.3,15 New technology using culture-independent DNA sequencing enables comprehensive detection of intestinal microbes and permits simultaneous characterization of entire microbial communities. Multinational consortia have been established to characterize the “normal” adult microbiome using these new methods;16 however, these methods have been underused in infant studies.
Because early colonization may have long-lasting effects on health, infant studies are vital.3,4 Among the few studies of infant gut microbiota using DNA sequencing, most were conducted in restricted populations, such as infants delivered vaginally,17 infants born by cesarean delivery who were formula-fed18 or preterm infants with necrotizing enterocolitis.19

Thus, the gut microbiota is essential to human health, yet the acquisition and development of this microbial community during infancy remains poorly understood.3 In the current study, we address this gap in knowledge using new sequencing technology and detailed exposure assessments20 of healthy Canadian infants selected from a national birth cohort to provide representative, comprehensive profiles of gut microbiota according to mode of delivery and infant diet.
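The richness and diversity contrasts in the Results are computed per sample from taxon read counts; a minimal sketch of observed richness and the Shannon index (hypothetical counts, not CHILD cohort data):

```python
import math

def richness_and_shannon(counts):
    """Observed richness and Shannon diversity (natural log) from raw
    per-taxon read counts for one fecal sample."""
    total = sum(counts)
    props = [c / total for c in counts if c > 0]
    shannon = -sum(p * math.log(p) for p in props)
    return len(props), shannon

# Hypothetical read counts: a Bifidobacterium-dominated profile vs. a more
# even, richer profile (as described for formula-fed infants).
print(richness_and_shannon([900, 50, 30, 20]))              # low diversity
print(richness_and_shannon([300, 250, 200, 150, 60, 40]))   # higher diversity
```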

11.

Background

The pathogenesis of appendicitis is unclear. We evaluated whether exposure to air pollution was associated with an increased incidence of appendicitis.

Methods

We identified 5191 adults who had been admitted to hospital with appendicitis between Apr. 1, 1999, and Dec. 31, 2006. The air pollutants studied were ozone, nitrogen dioxide, sulfur dioxide, carbon monoxide, and suspended particulate matter of less than 10 μm and less than 2.5 μm in diameter. We estimated the odds of appendicitis relative to short-term increases in concentrations of selected pollutants, alone and in combination, after controlling for temperature and relative humidity as well as the effects of age, sex and season.

Results

An interquartile-range increase in the 5-day average of ozone was associated with appendicitis (odds ratio [OR] 1.14, 95% confidence interval [CI] 1.03–1.25). In summer (July–August), the effects were most pronounced for ozone (OR 1.32, 95% CI 1.10–1.57), sulfur dioxide (OR 1.30, 95% CI 1.03–1.63), nitrogen dioxide (OR 1.76, 95% CI 1.20–2.58), carbon monoxide (OR 1.35, 95% CI 1.01–1.80) and particulate matter less than 10 μm in diameter (OR 1.20, 95% CI 1.05–1.38). We observed a significant effect of the air pollutants in the summer months among men but not among women (e.g., OR for an interquartile-range increase in the 5-day average of nitrogen dioxide 2.05, 95% CI 1.21–3.47, among men and 1.48, 95% CI 0.85–2.59, among women). In the double-pollutant model of exposure to ozone and nitrogen dioxide in the summer months, the effects of both pollutants were attenuated (ozone OR 1.22, 95% CI 1.01–1.48; nitrogen dioxide OR 1.48, 95% CI 0.97–2.24).

Interpretation

Our findings suggest that some cases of appendicitis may be triggered by short-term exposure to air pollution. If these findings are confirmed, measures to improve air quality may help to decrease rates of appendicitis.

Appendicitis was introduced into the medical vernacular in 1886.1 Since then, the prevailing theory of its pathogenesis implicated an obstruction of the appendiceal orifice by a fecalith or lymphoid hyperplasia.2 However, this notion does not completely account for variations in incidence observed by age,3,4 sex,3,4 ethnic background,3,4 family history,5 temporal–spatial clustering6 and seasonality,3,4 nor does it completely explain the trends in incidence of appendicitis in developed and developing nations.3,7,8

The incidence of appendicitis increased dramatically in industrialized nations in the 19th century and in the early part of the 20th century.1 Without explanation, it decreased in the middle and latter part of the 20th century.3 The decrease coincided with legislation to improve air quality. For example, after the United States Clean Air Act was passed in 1970,9 the incidence of appendicitis decreased by 14.6% from 1970 to 1984.3 Likewise, a 36% drop in incidence was reported in the United Kingdom between 1975 and 1994,10 after legislation was passed in 1956 and 1968 to improve air quality and in the 1970s to control industrial sources of air pollution. Furthermore, appendicitis is less common in developing nations; however, as these countries become more industrialized, the incidence of appendicitis has been increasing.7

Air pollution is known to be a risk factor for multiple conditions, to exacerbate disease states and to increase all-cause mortality.11 It has a direct effect on pulmonary diseases such as asthma11 and on nonpulmonary diseases including myocardial infarction, stroke and cancer.11–13 Inflammation induced by exposure to air pollution contributes to some adverse health effects.14–17 Similar to the effects of air pollution, a proinflammatory response has been associated with appendicitis.18–20

We conducted a case–crossover study involving a population-based cohort of patients admitted to hospital with appendicitis to determine whether short-term increases in concentrations of selected air pollutants were associated with hospital admission because of appendicitis.
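The ORs above are expressed per interquartile-range increase in a pollutant: a regression coefficient fitted per unit of concentration (here from a conditional logistic model, given the case–crossover design) is rescaled by the IQR before exponentiating. A sketch of that rescaling with invented numbers:

```python
import math

# A model fitted on a pollutant in its native units gives a coefficient
# beta per unit; the abstract reports ORs per IQR increase = exp(beta * IQR).
beta_per_ppb = 0.0066  # hypothetical coefficient for the 5-day ozone mean
iqr_ppb = 20.0         # hypothetical IQR of the 5-day ozone average

or_per_iqr = math.exp(beta_per_ppb * iqr_ppb)
print(f"OR per IQR increase: {or_per_iqr:.2f}")  # ~1.14, as in the abstract
```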

12.
Background:

Otitis media with effusion is a common problem that lacks an evidence-based nonsurgical treatment option. We assessed the clinical effectiveness of treatment with a nasal balloon device in a primary care setting.

Methods:

We conducted an open, pragmatic randomized controlled trial set in 43 family practices in the United Kingdom. Children aged 4–11 years with a recent history of ear symptoms and otitis media with effusion in 1 or both ears, confirmed by tympanometry, were allocated to receive either autoinflation 3 times daily for 1–3 months plus usual care or usual care alone. Clearance of middle-ear fluid at 1 and 3 months was assessed by experts masked to allocation.

Results:

Of 320 children enrolled, those receiving autoinflation were more likely than controls to have normal tympanograms at 1 month (47.3% [62/131] v. 35.6% [47/132]; adjusted relative risk [RR] 1.36, 95% confidence interval [CI] 0.99 to 1.88) and at 3 months (49.6% [62/125] v. 38.3% [46/120]; adjusted RR 1.37, 95% CI 1.03 to 1.83; number needed to treat = 9). Autoinflation produced greater improvements in ear-related quality of life (adjusted between-group difference in change from baseline in OMQ-14 [an ear-related measure of quality of life] score −0.42, 95% CI −0.63 to −0.22). Compliance was 89% at 1 month and 80% at 3 months. Adverse events were mild, infrequent and comparable between groups.

Interpretation:

Autoinflation in children aged 4–11 years with otitis media with effusion is feasible in primary care and effective both in clearing effusions and improving symptoms and ear-related child and parent quality of life. Trial registration: ISRCTN, No. 55208702.

Otitis media with effusion, also known as glue ear, is an accumulation of fluid in the middle ear, without symptoms or signs of an acute ear infection.
It is often associated with viral infection.1–3 The prevalence rises to 46% in children aged 4–5 years,4 when hearing difficulty, other ear-related symptoms and broader developmental concerns often bring the condition to medical attention.3,5,6 Middle-ear fluid is associated with conductive hearing losses of about 15–45 dB HL.7 Resolution is clinically unpredictable,8–10 with about a third of cases showing recurrence.11 In the United Kingdom, about 200 000 children with the condition are seen annually in primary care.12,13 Research suggests some children seen in primary care are as badly affected as those seen in hospital.7,9,14,15 In the United States, there were 2.2 million diagnosed episodes in 2004, costing an estimated $4.0 billion.16 Rates of ventilation tube surgery show variability between countries,17–19 with a declining trend in the UK.20

Initial clinical management consists of reasonable temporizing or delay before considering surgery.13 Unfortunately, all available medical treatments for otitis media with effusion such as antibiotics, antihistamines, decongestants and intranasal steroids are ineffective and have unwanted effects, and therefore cannot be recommended.21–23 Not only are antibiotics ineffective, but resistance to them poses a major threat to public health.24,25 Although surgery is effective for a carefully selected minority,13,26,27 a simple low-cost, nonsurgical treatment option could benefit a much larger group of symptomatic children, with the purpose of addressing legitimate clinical concerns without incurring excessive delays.

Autoinflation using a nasal balloon device is a low-cost intervention with the potential to be used more widely in primary care, but current evidence of its effectiveness is limited to several small hospital-based trials28 that found a higher rate of tympanometric resolution of ear fluid at 1 month.29–31 Evidence of feasibility and effectiveness of autoinflation to inform wider clinical use is lacking.13,28 Thus we report here the findings of a large pragmatic trial of the clinical effectiveness of nasal balloon autoinflation in a spectrum of children with clinically confirmed otitis media with effusion identified from primary care.
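The number needed to treat quoted in the Results follows from the absolute risk reduction at 3 months:

$$\mathrm{NNT} = \frac{1}{\mathrm{ARR}} = \frac{1}{0.496 - 0.383} = \frac{1}{0.113} \approx 8.8,$$

which is conventionally rounded up to 9.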

13.
Schultz AS, Finegan B, Nykiforuk CI, Kvern MA. CMAJ. 2011;183(18):E1334–E1344.

Background:

Many hospitals have adopted smoke-free policies on their property. We examined the consequences of such polices at two Canadian tertiary acute-care hospitals.

Methods:

We conducted a qualitative study using ethnographic techniques over a six-month period. Participants (n = 186) shared their perspectives on and experiences with tobacco dependence and managing the use of tobacco, as well as their impressions of the smoke-free policy. We interviewed inpatients individually from eight wards (n = 82), key policy-makers (n = 9) and support staff (n = 14) and held 16 focus groups with health care providers and ward staff (n = 81). We also reviewed ward documents relating to tobacco dependence and looked at smoking-related activities on hospital property.

Results:

Noncompliance with the policy and exposure to secondhand smoke were ongoing concerns. Peoples’ impressions of the use of tobacco varied, including divergent opinions as to whether such use was a bad habit or an addiction. Treatment for tobacco dependence and the management of symptoms of withdrawal were offered inconsistently. Participants voiced concerns over patient safety and leaving the ward to smoke.

Interpretation:

Policies mandating smoke-free hospital property have important consequences beyond noncompliance, including concerns over patient safety and disruptions to care. Without adequately available and accessible support for withdrawal from tobacco, patients will continue to face personal risk when they leave hospital property to smoke.

Canadian cities and provinces have passed smoking bans with the goal of reducing people’s exposure to secondhand smoke in workplaces, public spaces and on the property adjacent to public buildings.1,2 In response, Canadian health authorities and hospitals began implementing policies mandating smoke-free hospital property, with the goals of reducing the exposure of workers, patients and visitors to tobacco smoke while delivering a public health message about the dangers of smoking.2–5 An additional anticipated outcome was the reduced use of tobacco among patients and staff. The impetuses for adopting smoke-free policies include public support for such legislation and the potential for litigation for exposure to second-hand smoke.2,4

Tobacco use is a modifiable risk factor associated with a variety of cancers, cardiovascular diseases and respiratory conditions.6–11 Patients in hospital who use tobacco tend to have more surgical complications and exacerbations of acute and chronic health conditions than patients who do not use tobacco.6–11 Any policy aimed at reducing exposure to tobacco in hospitals is well supported by evidence, as is the integration of interventions targeting tobacco dependence.12 Unfortunately, most of the nearly five million Canadians who smoke will receive suboptimal treatment,13 as the routine provision of interventions for tobacco dependence in hospital settings is not a practice norm.14–16 In smoke-free hospitals, two studies suggest minimal support is offered for withdrawal,17,18 and one reports an increased use of nicotine-replacement therapy after the implementation of the smoke-free policy.19

Assessments of the effectiveness of smoke-free policies for hospital property tend to focus on noncompliance and related issues of enforcement.17,20,21 Although evidence of noncompliance and litter on hospital property2,17,20 implies ongoing exposure to tobacco smoke, half of the participating hospital sites in one study reported less exposure to tobacco smoke within hospital buildings and on the property.18 In addition, there is evidence to suggest some decline in smoking among staff.18,19,21,22

We sought to determine the consequences of policies mandating smoke-free hospital property in two Canadian acute-care hospitals by eliciting lived experiences of the people faced with enacting the policies: patients and health care providers. In addition, we elicited stories from hospital support staff and administrators regarding the policies.

14.
Background:

Rates of imaging for low-back pain are high and are associated with increased health care costs and radiation exposure as well as potentially poorer patient outcomes. We conducted a systematic review to investigate the effectiveness of interventions aimed at reducing the use of imaging for low-back pain.

Methods:

We searched MEDLINE, Embase, CINAHL and the Cochrane Central Register of Controlled Trials from the earliest records to June 23, 2014. We included randomized controlled trials, controlled clinical trials and interrupted time series studies that assessed interventions designed to reduce the use of imaging in any clinical setting, including primary, emergency and specialist care. Two independent reviewers extracted data and assessed risk of bias. We used raw data on imaging rates to calculate summary statistics. Study heterogeneity prevented meta-analysis.

Results:

A total of 8500 records were identified through the literature search. Of the 54 potentially eligible studies reviewed in full, 7 were included in our review. Clinical decision support involving a modified referral form in a hospital setting reduced imaging by 36.8% (95% confidence interval [CI] 33.2% to 40.5%). Targeted reminders to primary care physicians of appropriate indications for imaging reduced referrals for imaging by 22.5% (95% CI 8.4% to 36.8%). Interventions that used practitioner audits and feedback, practitioner education or guideline dissemination did not significantly reduce imaging rates. Lack of power within some of the included studies resulted in lack of statistical significance despite potentially clinically important effects.

Interpretation:

Clinical decision support in a hospital setting and targeted reminders to primary care doctors were effective interventions in reducing the use of imaging for low-back pain. These are potentially low-cost interventions that would substantially decrease medical expenditures associated with the management of low-back pain.

Current evidence-based clinical practice guidelines recommend against the routine use of imaging in patients presenting with low-back pain.1–3 Despite this, imaging rates remain high,4,5 which indicates poor concordance with these guidelines.6,7

Unnecessary imaging for low-back pain has been associated with poorer patient outcomes, increased radiation exposure and higher health care costs.8 No short- or long-term clinical benefits have been shown with routine imaging of the low back, and the diagnostic value of incidental imaging findings remains uncertain.9–12 A 2008 systematic review found that imaging accounted for 7% of direct costs associated with low-back pain, which in 1998 translated to more than US$6 billion in the United States and £114 million in the United Kingdom.13 Current costs are likely to be substantially higher, with an estimated 65% increase in spine-related expenditures between 1997 and 2005.14

Various interventions have been tried for reducing imaging rates among people with low-back pain. These include strategies targeted at the practitioner such as guideline dissemination,15–17 education workshops,18,19 audit and feedback of imaging use,7,20,21 ongoing reminders7 and clinical decision support.22–24 It is unclear which, if any, of these strategies are effective.25 We conducted a systematic review to investigate the effectiveness of interventions designed to reduce imaging rates for the management of low-back pain.

15.
The erythropoietin receptor (EpoR) was first identified and characterized in red blood cell (RBC) progenitors, in which it stimulates proliferation and survival. The target in humans for EpoR agonist drugs therefore appears clear: to treat anemia. However, there is evidence of pleiotropic actions of erythropoietin (Epo). For that reason, rhEpo therapy has been suggested as a reliable approach for treating a broad range of pathologies, including heart and cardiovascular diseases, neurodegenerative disorders (Parkinson’s and Alzheimer’s disease), spinal cord injury, stroke, diabetic retinopathy and rare diseases (Friedreich ataxia). Unfortunately, the side effects of rhEpo are also evident. A new generation of nonhematopoietic EpoR agonist drugs (asialoEpo, Cepo and ARA 290) has been investigated and further developed. These EpoR agonists lack the erythropoietic activity of Epo while preserving its tissue-protective properties, and they are expected to provide better outcomes in ongoing clinical trials. Nonhematopoietic EpoR agonists represent safer and more effective surrogates for the treatment of several diseases, such as brain and peripheral nerve injury, diabetic complications, renal ischemia, rare diseases, myocardial infarction, chronic heart disease and others.

Initially, the erythropoietin receptor (EpoR) was discovered and described in red blood cell (RBC) progenitors, where it stimulates their proliferation and survival. Erythropoietin (Epo) is synthesized mainly in the fetal liver and adult kidneys (1–3). It was therefore hypothesized that Epo acts exclusively on erythroid progenitor cells. Accordingly, the target in humans for EpoR agonist drugs (such as recombinant erythropoietin [rhEpo], generally called erythropoiesis-stimulating agents) appeared clear: to treat anemia. However, evidence of a kaleidoscope of pleiotropic actions of Epo has accumulated (4,5). Research on the Epo/EpoR axis initially travelled from the basic research laboratory to clinical therapeutics; as a consequence of clinical observations, basic research on Epo/EpoR has now returned to expand its clinical therapeutic applicability.

Although the kidney and liver have long been considered the major sites of synthesis, Epo mRNA expression has also been detected in the brain (neurons and glial cells), lung, heart, bone marrow, spleen, hair follicles, reproductive tract and osteoblasts (6–17). Accordingly, EpoR has been detected in other cells, such as neurons, astrocytes, microglia, immune cells, cancer cell lines, endothelial cells, bone marrow stromal cells and cells of the heart, reproductive system, gastrointestinal tract, kidney, pancreas and skeletal muscle (18–27). Conversely, Sinclair et al. (28) reported data questioning the presence or function of EpoR on nonhematopoietic cells (endothelial, neuronal and cardiac cells), suggesting that further studies are needed to confirm the diversity of EpoR. Elliott et al. (29) also showed that EpoR is virtually undetectable in human renal cells and other tissues, with no detectable EpoR on cell surfaces. These results have raised doubts about the preclinical basis for studies exploring the pleiotropic actions of rhEpo (30).

In light of these data, a return to basic research has become necessary, and many studies in animal models have been initiated or already performed. The effects of rhEpo administration on angiogenesis, myogenesis, shifts in muscle fiber types and oxidative enzyme activities in skeletal muscle (4,31), on mitochondrial biogenesis in cardiac muscle (32), on cognition (31), on antiapoptotic and antiinflammatory actions (33–37) and on plasma glucose concentrations (38) have been studied extensively. Neuroprotective and cardioprotective properties have been the most widely described. Accordingly, rhEpo therapy was suggested as a reliable approach for treating a broad range of pathologies, including heart and cardiovascular diseases, neurodegenerative disorders (Parkinson’s and Alzheimer’s disease), spinal cord injury, stroke, diabetic retinopathy and rare diseases (Friedreich ataxia).

Unfortunately, the side effects of rhEpo are also evident. Epo is involved in regulating tumor angiogenesis (39) and probably in the survival and growth of tumor cells (25,40,41). rhEpo administration also induces serious side effects such as hypertension, polycythemia, myocardial infarction, stroke and seizures, platelet activation and increased thromboembolic risk, and immunogenicity (42–46), with the most common being hypertension (47,48). A new generation of nonhematopoietic EpoR agonist drugs has hence been investigated and further developed in animal models. These compounds, namely asialoerythropoietin (asialoEpo) and carbamylated Epo (Cepo), were designed to preserve the tissue-protective properties of native Epo while reducing its erythropoietic activity (49,50). The advantage of nonhematopoietic Epo analogs is that they avoid stimulating hematopoiesis, thereby preventing an increased hematocrit and the consequent procoagulant state or increased blood pressure. In this regard, a new study by van Rijt et al. (51) has shed light on this topic: a nonhematopoietic EpoR agonist named ARA 290 has been developed and shows promising cytoprotective capacity to prevent renal ischemia/reperfusion injury. ARA 290 is a short peptide that has shown no safety concerns in preclinical and human studies, and it has proven efficacious in cardiac disorders (52,53), neuropathic pain (54) and sarcoidosis-induced chronic neuropathic pain (55). Thus, ARA 290 is a novel nonhematopoietic EpoR agonist with promising therapeutic options for a wide range of pathologies, without increased risk of cardiovascular events.

Overall, this new generation of EpoR agonists, which preserve the tissue-protective properties of Epo without its erythropoietic activity, will provide better outcomes in ongoing clinical trials (49,50). Nonhematopoietic EpoR agonists represent safer and more effective surrogates for the treatment of several diseases, such as brain and peripheral nerve injury, diabetic complications, renal ischemia, rare diseases, myocardial infarction, chronic heart disease and others.

16.
17.

Background

Chest pain can be caused by various conditions, with life-threatening cardiac disease being of greatest concern. Prediction scores to rule out coronary artery disease have been developed for use in emergency settings. We developed and validated a simple prediction rule for use in primary care.

Methods

We conducted a cross-sectional diagnostic study in 74 primary care practices in Germany. Primary care physicians recruited all consecutive patients who presented with chest pain (n = 1249) and recorded symptoms and findings for each patient (derivation cohort). An independent expert panel reviewed follow-up data obtained at six weeks and six months on symptoms, investigations, hospital admissions and medications to determine the presence or absence of coronary artery disease. Adjusted odds ratios of relevant variables were used to develop a prediction rule. We calculated measures of diagnostic accuracy for different cut-off values for the prediction scores using data derived from another prospective primary care study (validation cohort).

Results

The prediction rule contained five determinants (age/sex, known vascular disease, patient assumes pain is of cardiac origin, pain is worse during exercise, and pain is not reproducible by palpation), with the score ranging from 0 to 5 points. The area under the receiver operating characteristic curve was 0.87 (95% confidence interval [CI] 0.83–0.91) for the derivation cohort and 0.90 (95% CI 0.87–0.93) for the validation cohort. The best overall discrimination was with a cut-off value of 3 (positive result 3–5 points; negative result ≤ 2 points), which had a sensitivity of 87.1% (95% CI 79.9%–94.2%) and a specificity of 80.8% (95% CI 77.6%–83.9%).

Interpretation

The prediction rule for coronary artery disease in primary care proved to be robust in the validation cohort. It can help to rule out coronary artery disease in patients presenting with chest pain in primary care.

Chest pain is common. Studies have shown a lifetime prevalence of 20% to 40% in the general population.1 Its prevalence in primary care ranges from 0.7% to 2.7%, depending on inclusion criteria and country,2–4 with coronary artery disease being the underlying cause in about 12% of primary care patients.1,5 General practitioners are challenged to identify serious cardiac disease reliably while also protecting patients from unnecessary investigations and hospital admissions. Because electrocardiography and the cardiac troponin test are of limited value in primary care,6,7 history taking and physical examination remain the main diagnostic tools.

Most published studies on the diagnostic accuracy of signs and symptoms for acute coronary events have been conducted in high-prevalence settings such as hospital emergency departments.8–10 Predictive scores have also been developed for use in emergency departments, mainly for the diagnosis of acute coronary syndromes.11–13 To what degree these apply in primary care is unknown.14–16

A clinical prediction score to rule out coronary artery disease in general practice has been developed,17 but it did not perform well when validated externally. The aim of our study was to develop a simple, valid and usable prediction score based on signs and symptoms to help primary care physicians rule out coronary artery disease in patients presenting with chest pain.
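As a concrete illustration of how such a rule is applied at the point of care, here is a minimal Python sketch of the five-determinant score and its cut-off of 3. The function and parameter names are hypothetical, and the age/sex criterion is represented as a single pre-judged flag because its exact thresholds are not spelled out in the abstract.

```python
# A minimal sketch of the five-item rule described above. Each determinant
# is a boolean already judged by the clinician; the age/sex criterion is a
# single flag because its thresholds are not given in the abstract.
def chest_pain_score(age_sex_criterion: bool,
                     known_vascular_disease: bool,
                     patient_assumes_cardiac: bool,
                     pain_worse_on_exercise: bool,
                     not_reproducible_by_palpation: bool) -> int:
    """Sum the five determinants into a 0-5 point score."""
    return sum([age_sex_criterion, known_vascular_disease,
                patient_assumes_cardiac, pain_worse_on_exercise,
                not_reproducible_by_palpation])

def rule_out_cad(score: int, cutoff: int = 3) -> bool:
    """True means a negative result (score <= 2): CAD is unlikely."""
    return score < cutoff

score = chest_pain_score(True, False, True, False, True)
print(score, "-> CAD ruled out" if rule_out_cad(score) else "-> positive, work up")
```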

18.

Background:

Little evidence exists on the effect of an energy-unrestricted healthy diet on metabolic syndrome. We evaluated the long-term effect of Mediterranean diets ad libitum on the incidence or reversion of metabolic syndrome.

Methods:

We performed a secondary analysis of the PREDIMED trial — a multicentre, randomized trial done between October 2003 and December 2010 that involved men and women (age 55–80 yr) at high risk for cardiovascular disease. Participants were randomly assigned to 1 of 3 dietary interventions: a Mediterranean diet supplemented with extra-virgin olive oil, a Mediterranean diet supplemented with nuts or advice on following a low-fat diet (the control group). The interventions did not include increased physical activity or weight loss as a goal. We analyzed available data from 5801 participants. We determined the effect of diet on incidence and reversion of metabolic syndrome using Cox regression analysis to calculate hazard ratios (HRs) and 95% confidence intervals (CIs).

Results:

Over a median follow-up of 4.8 years, metabolic syndrome developed in 960 (50.0%) of the 1919 participants who did not have the condition at baseline. The risk of developing metabolic syndrome did not differ between participants assigned to the control diet and those assigned to either of the Mediterranean diets (control v. olive oil HR 1.10, 95% CI 0.94–1.30, p = 0.231; control v. nuts HR 1.08, 95% CI 0.92–1.27, p = 0.3). Reversion occurred in 958 (28.2%) of the 3392 participants who had metabolic syndrome at baseline. Compared with the control group, participants on either Mediterranean diet were more likely to undergo reversion (control v. olive oil HR 1.35, 95% CI 1.15–1.58, p < 0.001; control v. nuts HR 1.28, 95% CI 1.08–1.51, p < 0.001). Participants in the group receiving olive oil supplementation showed significant decreases in both central obesity and high fasting glucose (p = 0.02); participants in the group supplemented with nuts showed a significant decrease in central obesity.

Interpretation:

A Mediterranean diet supplemented with either extra-virgin olive oil or nuts is not associated with the onset of metabolic syndrome, but such diets are more likely to cause reversion of the condition. An energy-unrestricted Mediterranean diet may be useful in reducing the risks of central obesity and hyperglycemia in people at high risk of cardiovascular disease. Trial registration: ISRCTN35739639.

Metabolic syndrome is a cluster of 3 or more related cardiometabolic risk factors: central obesity (determined by waist circumference), hypertension, hypertriglyceridemia, low plasma high-density lipoprotein (HDL) cholesterol levels and hyperglycemia. Having the syndrome increases a person’s risk for type 2 diabetes and cardiovascular disease.1,2 In addition, the condition is associated with increased morbidity and all-cause mortality.1,3–5 The worldwide prevalence of metabolic syndrome in adults approaches 25%6–8 and increases with age,7 especially among women,8,9 making it an important public health issue.

Several studies have shown that lifestyle modifications,10 such as increased physical activity,11 adherence to a healthy diet12,13 or weight loss,14–16 are associated with reversion of the metabolic syndrome and its components. However, little information exists as to whether changes in the overall dietary pattern without weight loss might also be effective in preventing and managing the condition.

The Mediterranean diet is recognized as one of the healthiest dietary patterns. It has shown benefits in patients with cardiovascular disease17,18 and in the prevention and treatment of related conditions, such as diabetes,19–21 hypertension22,23 and metabolic syndrome.24 Several cross-sectional25–29 and prospective30–32 epidemiologic studies have suggested an inverse association between adherence to the Mediterranean diet and the prevalence or incidence of metabolic syndrome. Evidence from clinical trials has shown that an energy-restricted Mediterranean diet33 or adopting a Mediterranean diet after weight loss34 has a beneficial effect on metabolic syndrome. However, these studies did not determine whether the effect could be attributed to the weight loss or to the diets themselves.

Seminal data from the PREDIMED (PREvención con DIeta MEDiterránea) study suggested that adherence to a Mediterranean diet supplemented with nuts reversed metabolic syndrome more than advice to follow a low-fat diet did.35 However, that report was based on data from only 1224 participants followed for 1 year. We have analyzed data from the final PREDIMED cohort after a median follow-up of 4.8 years to determine the long-term effects of a Mediterranean diet on metabolic syndrome.
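For readers who want to see how hazard ratios like those above are estimated, the sketch below fits a Cox proportional hazards model with the lifelines package. The data frame, its column names and the tiny toy dataset are hypothetical stand-ins for the trial's variables, chosen only so the example runs; they are not PREDIMED data.

```python
# A minimal sketch, assuming a tidy per-participant table, of estimating
# hazard ratios with a Cox model via the `lifelines` package. All column
# names and values are hypothetical placeholders, not the trial's data.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "years_followed": [4.8, 3.1, 4.8, 2.0, 4.8, 1.2],
    "reverted":       [0,   1,   0,   1,   1,   0],   # event indicator
    "diet_olive_oil": [1,   1,   0,   0,   0,   1],   # vs. control diet
    "diet_nuts":      [0,   0,   1,   1,   0,   0],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="years_followed", event_col="reverted")
# exp(coef) in the summary is the hazard ratio with its 95% CI
print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%"]])
```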

19.

Background

Fractures have largely been assessed by their impact on quality of life or health care costs. We conducted this study to evaluate the relation between fractures and mortality.

Methods

A total of 7753 randomly selected people (2187 men and 5566 women) aged 50 years and older from across Canada participated in a 5-year observational cohort study. Incident fractures were identified on the basis of validated self-report and were classified by type (vertebral, pelvic, forearm or wrist, rib, hip and “other”). We subdivided fracture groups by the year in which the fracture occurred during follow-up; those occurring in the fourth and fifth years were grouped together. We examined the relation between the time of the incident fracture and death.

Results

Compared with participants who had no fracture during follow-up, those who had a vertebral fracture in the second year were at increased risk of death (adjusted hazard ratio [HR] 2.7, 95% confidence interval [CI] 1.1–6.6); also at risk were those who had a hip fracture during the first year (adjusted HR 3.2, 95% CI 1.4–7.4). Among women, the risk of death was increased for those with a vertebral fracture during the first year (adjusted HR 3.7, 95% CI 1.1–12.8) or the second year of follow-up (adjusted HR 3.2, 95% CI 1.2–8.1). The risk of death was also increased among women with hip fracture during the first year of follow-up (adjusted HR 3.0, 95% CI 1.0–8.7).

Interpretation

Vertebral and hip fractures are associated with an increased risk of death. Interventions that reduce the incidence of these fractures need to be implemented to improve survival.

Osteoporosis-related fractures are a major health concern, affecting a growing number of individuals worldwide. The burden of fracture has largely been assessed by the impact on health-related quality of life and health care costs.1,2 Fractures can also be associated with death. However, studies that have examined the relation between fractures and mortality have had limitations that may influence their results and generalizability, including small samples,3,4 the examination of only 1 type of fracture,4–10 the inclusion of only women,8,11 the enrolment of participants from specific areas (i.e., hospitals or certain geographic regions),3,4,7,8,10,12 the nonrandom selection of participants3–11 and the lack of statistical adjustment for confounding factors that may influence mortality.3,5–7,12

We evaluated the relation between incident fractures and mortality over a 5-year period in a cohort of men and women 50 years of age and older. In addition, we examined whether other characteristics of participants were risk factors for death.
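As a minimal illustration of how survival after an incident fracture is described in a follow-up cohort like this one, the sketch below computes a Kaplan–Meier estimate with the lifelines package. The durations and event indicators are invented toy data, not the study's, and the study itself reported adjusted hazard ratios rather than this simple unadjusted curve.

```python
# A minimal sketch: Kaplan-Meier survival estimation over a 5-year
# follow-up window using the `lifelines` package. The numbers are
# invented for illustration; they are not data from the study.
from lifelines import KaplanMeierFitter

years_to_death_or_censor = [5.0, 4.2, 5.0, 1.3, 5.0, 2.8, 5.0, 0.9]
died =                     [0,   1,   0,   1,   0,   1,   0,   1]

kmf = KaplanMeierFitter()
kmf.fit(years_to_death_or_censor, event_observed=died,
        label="incident fracture (toy data)")
print(kmf.survival_function_)     # estimated S(t) at each event time
print(kmf.median_survival_time_)  # inf if S(t) never drops below 0.5
```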

20.
Background: Head injuries have been associated with subsequent suicide among military personnel, but outcomes after a concussion in the community are uncertain. We assessed the long-term risk of suicide after concussions occurring on weekends or weekdays in the community.

Methods: We performed a longitudinal cohort analysis of adults with a diagnosis of concussion in Ontario, Canada, from Apr. 1, 1992, to Mar. 31, 2012 (a 20-yr period), excluding severe cases that resulted in hospital admission. The primary outcome was the long-term risk of suicide after a weekend or weekday concussion.

Results: We identified 235 110 patients with a concussion. Their mean age was 41 years, 52% were men, and most (86%) lived in an urban location. A total of 667 subsequent suicides occurred over a median follow-up of 9.3 years, equivalent to 31 deaths per 100 000 patients annually, or 3 times the population norm. Weekend concussions were associated with a further one-third increase in the risk of suicide compared with weekday concussions (relative risk 1.36, 95% confidence interval 1.14–1.64). The increased risk applied regardless of patients’ demographic characteristics, was independent of past psychiatric conditions, became accentuated with time and exceeded the risk among military personnel. Half of these patients had visited a physician in the last week of life.

Interpretation: Adults with a diagnosis of concussion had an increased long-term risk of suicide, particularly after concussions on weekends. Greater attention to the long-term care of patients after a concussion in the community might save lives, because deaths from suicide can be prevented.

Suicide is a leading cause of death in both military and community settings.1 During 2010, 3951 suicide deaths occurred in Canada2 and 38 364 in the United States.3 The frequency of attempted suicide is about 25 times higher, and the financial costs in the US equate to about US$40 billion annually.4 The losses from suicide in Canada are comparable to those in other countries when adjusted for population size.5 Suicide deaths can be devastating to surviving family and friends.6 Suicide in the community is almost always related to a psychiatric illness (e.g., depression, substance abuse), whereas suicide in the military is sometimes linked to a concussion from combat injury.7–10

Concussion is the most common brain injury in young adults and is defined as a transient disturbance of mental function caused by acute trauma.11 About 4 million concussion cases occur in the US each year, equivalent to a rate of about 1 per 1000 adults annually;12 direct Canadian data are not available. The majority lead to self-limited symptoms, and only a small proportion have a protracted course.13 However, the frequency of depression after concussion can be high,14,15 and traumatic brain injury in the military has been associated with subsequent suicide.8,16 Severe head trauma resulting in admission to hospital has also been associated with an increased risk of suicide, whereas mild concussion in ambulatory adults is an uncertain risk factor.17–20

The aim of this study was to determine whether concussion was associated with an increased long-term risk of suicide and, if so, whether the day of the concussion (weekend v. weekday) could be used to identify patients at further increased risk. The severity and mechanism of injury may differ by day of the week because recreational injuries are more common on weekends and occupational injuries are more common on weekdays.21–27 The risk of a second concussion, the use of protective safeguards, the propensity to seek care, subsequent oversight, sense of responsibility and other nuances may also differ for concussions acquired through weekend recreation rather than weekday work.28–31 Medical care on weekends may also be limited because of shortfalls in staffing.32
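As an arithmetic illustration of the relative risk reported above, the sketch below computes a risk ratio and its log-normal 95% CI from event counts. The counts are hypothetical and chosen only to make the example runnable; they are not the study's data.

```python
# A minimal sketch: relative risk of group a vs. group b with a
# log-normal 95% CI, computed from counts. The counts below are
# hypothetical, for illustration only.
from math import exp, sqrt

def relative_risk(events_a, n_a, events_b, n_b, z=1.96):
    """Risk ratio (a vs. b) with a log-normal 95% CI."""
    rr = (events_a / n_a) / (events_b / n_b)
    se_log = sqrt(1 / events_a - 1 / n_a + 1 / events_b - 1 / n_b)
    return rr, rr * exp(-z * se_log), rr * exp(z * se_log)

# Hypothetical example: 220 suicides among 60 000 weekend concussions
# vs. 447 among 175 110 weekday concussions.
rr, lo, hi = relative_risk(220, 60_000, 447, 175_110)
print(f"RR {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```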
