1.

Background

The prevalence of frailty increases with age in older adults, but frailty is largely unreported for younger adults, where its associated risk is less clear. Furthermore, less is known about how frailty changes over time among younger adults. We estimated the prevalence and outcomes of frailty, in relation to accumulation of deficits, across the adult lifespan.

Methods

We analyzed data for community-dwelling respondents (age 15–102 years at baseline) to the longitudinal component of the National Population Health Survey, with seven two-year cycles, beginning 1994–1995. The outcomes were death, use of health services and change in health status, measured in terms of a Frailty Index constructed from 42 self-reported health variables.

Results

The sample consisted of 14 713 respondents (54.2% women). Vital status was known for more than 99% of the respondents. The prevalence of frailty increased with age, from 2.0% (95% confidence interval [CI] 1.7%–2.4%) among those younger than 30 years to 22.4% (95% CI 19.0%–25.8%) for those older than age 65, including 43.7% (95% CI 37.1%–50.8%) for those 85 and older. At all ages, the 160-month mortality rate was lower among relatively fit people than among those who were frail (e.g., 2% v. 16% at age 40; 42% v. 83% at age 75 or older). These relatively fit people tended to remain relatively fit over time. Relative to all other groups, a greater proportion of the most frail people used health services at baseline (28.3%, 95% CI 21.5%–35.5%) and at each follow-up cycle (26.7%, 95% CI 15.4%–28.0%).

Interpretation

Deficits accumulated with age across the adult spectrum. At all ages, a higher Frailty Index was associated with higher mortality and greater use of health care services. At younger ages, recovery to the relatively fittest state was common, but the chance of complete recovery declined with age.

On average, health declines with age. Even so, at any given age the health status across a group of people varies. Variability in health status and in the risk for adverse outcomes for people of the same age is referred to as “frailty,” which typically has been studied among older adults.1,2 Although frailty can be operationalized in different ways, in general, people who report having no health problems are more likely to be fit than people who report having many problems. Unsurprisingly, the chance of adverse outcomes — death, admission to a long-term care institution or to hospital, or worsening of health status — increases with the number of problems that the individual has.3,4

The antecedents of frailty appear to arise some time before old age,5–9 although how frailty emerges as people age, whether it carries the same risk at all ages and the extent to which it fluctuates are less clear.9,10 In the study reported here, we evaluated changes in relative fitness and frailty across the adult lifespan. Our objectives were to investigate the effect of age on the prevalence of relative fitness and frailty, the characteristics of people who were relatively fit in comparison with those who were frail across the adult lifespan, the effects of fitness and frailty on mortality in relation to age and sex, and the characteristics of people who maintained the highest levels of fitness across a decade relative to those who at any point reported any decline.
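The deficit-accumulation approach behind this study's Frailty Index can be sketched in a few lines: the index is simply the proportion of measured deficits that are present. The 0.25 cut-off used below for labelling someone "frail" is a commonly cited illustrative threshold, an assumption rather than a value stated in this abstract, and the example respondent's deficit counts are hypothetical.

```python
def frailty_index(deficits):
    """Deficit-accumulation frailty index: the proportion of measured
    health deficits that are present (0 = none present, 1 = all present)."""
    if not deficits:
        raise ValueError("need at least one deficit variable")
    return sum(deficits) / len(deficits)

# Hypothetical respondent: 9 of 42 self-reported deficits present
# (42 matches the number of variables used in the study's index).
deficits = [1] * 9 + [0] * 33
fi = frailty_index(deficits)

# Illustrative cut-off (assumption): index > 0.25 labelled "frail".
status = "frail" if fi > 0.25 else "relatively fit"
```

The same function works for any deficit count, which is one reason the approach transfers across surveys with different numbers of health variables.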

2.

Background:

Greater awareness of sleep-disordered breathing and rising obesity rates have fueled demand for sleep studies. Sleep testing using level 3 portable devices may expedite diagnosis and reduce the costs associated with level 1 in-laboratory polysomnography. We sought to assess the diagnostic accuracy of level 3 testing compared with level 1 testing and to identify the appropriate patient population for each test.

Methods:

We conducted a systematic review and meta-analysis of comparative studies of level 3 versus level 1 sleep tests in adults with suspected sleep-disordered breathing. We searched 3 research databases and grey literature sources for studies that reported on diagnostic accuracy parameters or disease management after diagnosis. Two reviewers screened the search results, selected potentially relevant studies and extracted data. We used a bivariate mixed-effects binary regression model to estimate summary diagnostic accuracy parameters.

Results:

We included 59 studies involving a total of 5026 evaluable patients (mostly patients suspected of having obstructive sleep apnea). Of these, 19 studies were included in the meta-analysis. The estimated area under the receiver operating characteristic curve was high, ranging between 0.85 and 0.99 across different levels of disease severity. Summary sensitivity ranged between 0.79 and 0.97, and summary specificity ranged between 0.60 and 0.93 across different apnea–hypopnea cut-offs. We saw no significant difference in the clinical management parameters between patients who underwent either test to receive their diagnosis.

Interpretation:

Level 3 portable devices showed good diagnostic performance compared with level 1 sleep tests in adult patients with a high pretest probability of moderate to severe obstructive sleep apnea and no unstable comorbidities. For patients suspected of having other types of sleep-disordered breathing or sleep disorders not related to breathing, level 1 testing remains the reference standard.

Undiagnosed sleep-disordered breathing places a substantial burden on patients, families, health care systems and society.1 Sleep fragmentation and recurrent hypoxemia cause daytime sleepiness and impaired concentration, which increase the risk of motor vehicle collisions and occupational accidents.2–7 In addition, sleep-disordered breathing is associated with hypertension, stroke, cardiovascular disease, obesity and type 2 diabetes,8–12 all of which involve greater use of health care resources.13–17

Obstructive sleep apnea is the most common type of sleep-disordered breathing. Narrowing of the upper airway during inspiration results in episodes of apnea (breathing cessation for at least 10 seconds), hypopnea (reduced airflow), oxygen desaturation and arousal from sleep due to respiratory effort.18 Clinical signs and symptoms include snoring, reports of nocturnal apnea, gasping or choking witnessed by a partner, daytime sleepiness, morning headaches and inability to concentrate. Patients with obesity or cardiovascular disease are at increased risk.19 The severity of obstructive sleep apnea is usually graded using the apnea–hypopnea index (the mean number of apneas and hypopneas per hour of sleep) as follows: mild (5–14), moderate (15–29) and severe (≥ 30).18,20

Other, less common types of sleep-disordered breathing include upper airway resistance syndrome, obesity hypoventilation syndrome, central sleep apnea, and nocturnal hypoventilation/hypoxemia secondary to cardiopulmonary or neuromuscular disease. It is not uncommon for patients to have more than 1 type of sleep-disordered breathing.

Estimates of the prevalence of sleep-disordered breathing vary depending on the population (e.g., by sex, age and comorbidities).21 According to the Wisconsin Sleep Cohort Study, values in American adults (aged 30–60 yr) are 24% for men and 9% for women.1 A Canadian survey found a self-reported prevalence of sleep apnea of 3% among adults more than 18 years of age, and 5% among those more than 45 years of age.22 As the population ages and rates of obesity increase, the prevalence of sleep-disordered breathing is climbing.1,19,23,24 Given its clinical implications, accurate diagnosis and treatment of the condition are critical.

Level 1 sleep testing, or polysomnography, requires an overnight stay in a sleep laboratory with a technician in attendance. It captures a minimum of 7 channels of data (but typically ≥ 16), including respiratory, cardiovascular and neurologic parameters, to produce a comprehensive picture of sleep architecture. Level 1 is considered the reference standard for diagnosing all types of sleep-disordered breathing and sleep disorders.19,25–27 However, limited facilities and the growing demand for sleep studies have resulted in long wait times.28 Level 2 sleep testing uses level 1 equipment, but is performed without a technician in attendance.

Level 3 testing uses portable monitors that allow sleep studies to be done at the patient’s home or elsewhere. This option was introduced as a more accessible and less expensive alternative to in-laboratory polysomnography. Level 3 devices record at least 3 channels of data (e.g., oximetry, airflow, respiratory effort). Unlike level 1, level 3 testing cannot measure the duration of sleep, the number of arousals or sleep stages, nor can it detect nonrespiratory sleep disorders.27,29 Level 4 devices are also portable, but they capture less data — usually only 1 or 2 channels.27,30

We conducted a systematic review and meta-analysis to compare the diagnostic accuracy of the widely used level 3 portable monitors to in-laboratory polysomnography, and to determine the subpopulations of patients whose conditions might be most appropriately diagnosed with each test.
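The severity grading quoted above (mild 5–14, moderate 15–29, severe ≥ 30 events per hour) maps directly onto a small classification function; this sketch simply encodes those published cut-offs:

```python
def ahi_severity(ahi):
    """Grade obstructive sleep apnea from the apnea-hypopnea index
    (mean apneas + hypopneas per hour of sleep), using the cut-offs
    quoted in the text: mild 5-14, moderate 15-29, severe >= 30."""
    if ahi < 0:
        raise ValueError("AHI cannot be negative")
    if ahi < 5:
        return "no OSA"
    if ahi < 15:
        return "mild"
    if ahi < 30:
        return "moderate"
    return "severe"
```

Different apnea–hypopnea cut-offs like these are exactly what the summary sensitivity and specificity in the Results were estimated across.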

3.
Background:

Readmissions after hospital discharge are common and costly, but prediction models are poor at identifying patients at high risk of readmission. We evaluated the impact of frailty on readmission or death within 30 days after discharge from general internal medicine wards.

Methods:

We prospectively enrolled patients discharged from 7 medical wards at 2 teaching hospitals in Edmonton. Frailty was defined by means of the previously validated Clinical Frailty Scale. The primary outcome was the composite of readmission or death within 30 days after discharge.

Results:

Of the 495 patients included in the study, 162 (33%) met the definition of frailty: 91 (18%) had mild, 60 (12%) had moderate, and 11 (2%) had severe frailty. Frail patients were older, had more comorbidities, lower quality of life, and higher LACE scores at discharge than those who were not frail. The composite of 30-day readmission or death was higher among frail than among nonfrail patients (39 [24.1%] v. 46 [13.8%]). Although frailty added additional prognostic information to predictive models that included age, sex and LACE score, only moderate to severe frailty (31.0% event rate) was an independent risk factor for readmission or death (adjusted odds ratio 2.19, 95% confidence interval 1.12–4.24).

Interpretation:

Frailty was common and associated with a substantially increased risk of early readmission or death after discharge from medical wards. The Clinical Frailty Scale could be useful in identifying high-risk patients being discharged from general internal medicine wards.

Readmissions within 30 days after hospital discharge are common and costly occurrences. Although many studies have attempted to identify patients at highest risk of readmission, neither experienced clinicians nor experienced researchers using rigorously developed administrative data-rich algorithms can accurately predict which patients will not successfully transition back into the community.1–6 This suggests that currently unrecognized factors likely play a major role in readmission risk. Identification of these factors would be important for future initiatives to reduce readmission rates by targeting resources to those at highest risk.

Frailty is a frequently underdiagnosed condition, with prevalence estimates ranging from 27% to 80% among inpatients7–9 and from 4% to 59% among older adults living in the community,10 depending on the frailty measure used and the population evaluated. Frailty is a multidimensional syndrome of decreased reserve and resistance to stressors leading to increased vulnerability to adverse outcomes.11–14 The 2 models of frailty most commonly used in the literature are the phenotype model (e.g., the approach proposed by Fried and colleagues,15 which is based on 5 objective variables assessed at one point in time that do not include psychosocial and cognitive variables) and the cumulative deficit model (e.g., the Clinical Frailty Index, which is based on a mix of more than 30 variables capturing function in many domains over time).16–18

Although the gold standard for frailty assessment is a comprehensive geriatric assessment by a multidisciplinary team, both the phenotype and cumulative deficit models appear reasonably accurate for identifying frailty. However, both are somewhat cumbersome for routine use at the bedside.12 For these reasons, the Clinical Frailty Scale was developed; it relies on clinical judgment based on history taking and clinical examination. The Clinical Frailty Scale is easy to administer at the bedside; has been used by physicians, allied health professionals and research assistants; does not require any special equipment; is highly correlated with the Fried frailty index (r = 0.8);17 and appears to be valid, reliable and reproducible.19 Some risk-prediction models, such as the LACE Index, have tried to incorporate frailty, but they did not find it to be a significant independent variable, possibly owing to the frailty measure used. A systematic review of 30 risk-prediction models for hospital readmission found that only 2 included functional status.4

We conducted a study to evaluate whether frailty identified using the Clinical Frailty Scale is an independent predictor of death or readmission within 30 days after discharge from hospital.
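The crude contrast in the Results can be reproduced from the published counts (39 of 162 frail v. 46 of 333 nonfrail patients with the outcome). The adjusted odds ratio of 2.19 additionally controls for age, sex and LACE score, which is why it differs from this unadjusted figure:

```python
def odds_ratio(events_exposed, n_exposed, events_unexposed, n_unexposed):
    """Unadjusted (crude) odds ratio from event counts in two groups."""
    odds_exposed = events_exposed / (n_exposed - events_exposed)
    odds_unexposed = events_unexposed / (n_unexposed - events_unexposed)
    return odds_exposed / odds_unexposed

# Counts from the abstract: 39/162 frail and 46/333 nonfrail patients
# were readmitted or died within 30 days (333 = 495 - 162).
crude_or = odds_ratio(39, 162, 46, 333)  # roughly 1.98
```

Note that the crude odds ratio for *any* frailty falls below the adjusted estimate reported for moderate to severe frailty alone, consistent with the abstract's finding that only the more severe grades carried independent risk.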

4.

Background:

Screening for methicillin-resistant Staphylococcus aureus (MRSA) is intended to reduce nosocomial spread by identifying patients colonized by MRSA. Given the widespread use of this screening, we evaluated its potential clinical utility in predicting the resistance of clinical isolates of S. aureus.

Methods:

We conducted a 2-year retrospective cohort study that included patients with documented clinical infection with S. aureus and prior screening for MRSA. We determined test characteristics, including sensitivity and specificity, of screening for predicting the resistance of subsequent S. aureus isolates.

Results:

Of 510 patients included in the study, 53 (10%) had positive results from MRSA screening, and 79 (15%) of infecting isolates were resistant to methicillin. Screening for MRSA predicted methicillin resistance of the infecting isolate with 99% (95% confidence interval [CI] 98%–100%) specificity and 63% (95% CI 52%–74%) sensitivity. When screening swabs were obtained within 48 hours before isolate collection, sensitivity increased to 91% (95% CI 71%–99%) and specificity was 100% (95% CI 97%–100%), yielding a negative likelihood ratio of 0.09 (95% CI 0.01–0.3) and a negative predictive value of 98% (95% CI 95%–100%). The time between swab and isolate collection was a significant predictor of concordance of methicillin resistance in swabs and isolates (odds ratio 6.6, 95% CI 1.6–28.2).

Interpretation:

A positive result from MRSA screening predicted methicillin resistance in a culture-positive clinical infection with S. aureus. Negative results on MRSA screening were most useful for excluding methicillin resistance of a subsequent infection with S. aureus when the screening swab was obtained within 48 hours before collection of the clinical isolate.

Antimicrobial resistance is a global problem. The prevalence of resistant bacteria, including methicillin-resistant Staphylococcus aureus (MRSA), has reached high levels in many countries.1–3 Methicillin resistance in S. aureus is associated with excess mortality, hospital stays and health care costs,3,4 possibly owing to increased virulence or less effective treatments for MRSA compared with methicillin-sensitive S. aureus (MSSA).5

The initial selection of appropriate empirical antibiotic treatment affects mortality, morbidity and potential health care expenditures.6–8 The optimal choice of antibiotics in S. aureus infections is important for 3 major reasons: β-lactam antibiotics have shown improved efficacy over vancomycin and are the ideal treatment for susceptible strains of S. aureus;6 β-lactam antibiotics are ineffective against MRSA, and so vancomycin or other newer agents must be used empirically when MRSA is suspected; and unnecessary use of broad-spectrum antibiotics (e.g., vancomycin) can lead to the development of further antimicrobial resistance.9 It is therefore necessary to make informed decisions regarding selection of empirical antibiotics.10–13 Consideration of a patient’s previous colonization status is important, because colonization predates most hospital- and community-acquired infections.10,14

Universal or targeted surveillance for MRSA has been implemented widely as a means of limiting transmission of this antibiotic-resistant pathogen.15,16 Although results of MRSA screening are not intended to guide empirical treatment, they may offer an additional benefit among patients in whom clinical infection with S. aureus develops. Studies that examined the effects of MRSA carriage on the subsequent likelihood of infection allude to the potential diagnostic benefit of prior screening for MRSA.17,18 Colonization by MRSA at the time of hospital admission is associated with a 13-fold increased risk of subsequent MRSA infection.17,18 Moreover, studies that examined nasal carriage of S. aureus after documented S. aureus bacteremia have shown remarkable concordance between the genotypes of paired colonizing and invasive strains (82%–94%).19,20

The purpose of our study was to identify the usefulness of prior screening for MRSA for predicting methicillin resistance in culture-positive S. aureus infections.

5.
6.
7.

Background

The pathogenesis of appendicitis is unclear. We evaluated whether exposure to air pollution was associated with an increased incidence of appendicitis.

Methods

We identified 5191 adults who had been admitted to hospital with appendicitis between Apr. 1, 1999, and Dec. 31, 2006. The air pollutants studied were ozone, nitrogen dioxide, sulfur dioxide, carbon monoxide, and suspended particulate matter of less than 10 μm and less than 2.5 μm in diameter. We estimated the odds of appendicitis relative to short-term increases in concentrations of selected pollutants, alone and in combination, after controlling for temperature and relative humidity as well as the effects of age, sex and season.

Results

An increase in the interquartile range of the 5-day average of ozone was associated with appendicitis (odds ratio [OR] 1.14, 95% confidence interval [CI] 1.03–1.25). In summer (July–August), the effects were most pronounced for ozone (OR 1.32, 95% CI 1.10–1.57), sulfur dioxide (OR 1.30, 95% CI 1.03–1.63), nitrogen dioxide (OR 1.76, 95% CI 1.20–2.58), carbon monoxide (OR 1.35, 95% CI 1.01–1.80) and particulate matter less than 10 μm in diameter (OR 1.20, 95% CI 1.05–1.38). We observed a significant effect of the air pollutants in the summer months among men but not among women (e.g., OR for increase in the 5-day average of nitrogen dioxide 2.05, 95% CI 1.21–3.47, among men and 1.48, 95% CI 0.85–2.59, among women). The double-pollutant model of exposure to ozone and nitrogen dioxide in the summer months was associated with attenuation of the effects of ozone (OR 1.22, 95% CI 1.01–1.48) and nitrogen dioxide (OR 1.48, 95% CI 0.97–2.24).

Interpretation

Our findings suggest that some cases of appendicitis may be triggered by short-term exposure to air pollution. If these findings are confirmed, measures to improve air quality may help to decrease rates of appendicitis.

Appendicitis was introduced into the medical vernacular in 1886.1 Since then, the prevailing theory of its pathogenesis implicated an obstruction of the appendiceal orifice by a fecalith or lymphoid hyperplasia.2 However, this notion does not completely account for variations in incidence observed by age,3,4 sex,3,4 ethnic background,3,4 family history,5 temporal–spatial clustering6 and seasonality,3,4 nor does it completely explain the trends in incidence of appendicitis in developed and developing nations.3,7,8

The incidence of appendicitis increased dramatically in industrialized nations in the 19th century and in the early part of the 20th century.1 Without explanation, it decreased in the middle and latter part of the 20th century.3 The decrease coincided with legislation to improve air quality. For example, after the United States Clean Air Act was passed in 1970,9 the incidence of appendicitis decreased by 14.6% from 1970 to 1984.3 Likewise, a 36% drop in incidence was reported in the United Kingdom between 1975 and 1994,10 after legislation was passed in 1956 and 1968 to improve air quality and in the 1970s to control industrial sources of air pollution. Furthermore, appendicitis is less common in developing nations; however, as these countries become more industrialized, the incidence of appendicitis has been increasing.7

Air pollution is known to be a risk factor for multiple conditions, to exacerbate disease states and to increase all-cause mortality.11 It has a direct effect on pulmonary diseases such as asthma11 and on nonpulmonary diseases including myocardial infarction, stroke and cancer.11–13 Inflammation induced by exposure to air pollution contributes to some adverse health effects.14–17 Similar to the effects of air pollution, a proinflammatory response has been associated with appendicitis.18–20

We conducted a case–crossover study involving a population-based cohort of patients admitted to hospital with appendicitis to determine whether short-term increases in concentrations of selected air pollutants were associated with hospital admission because of appendicitis.
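Pollutant effects in the Results are expressed per interquartile-range (IQR) increase in exposure. Under the log-linear exposure-response model that such analyses typically assume (an assumption on my part, not a method stated in the abstract), a reported OR can be rescaled to other increments:

```python
import math

def scale_odds_ratio(or_per_unit, units):
    """Rescale an odds ratio estimated per one increment of exposure to a
    different increment, assuming a log-linear exposure-response model."""
    return math.exp(math.log(or_per_unit) * units)

# If a 1-IQR rise in the 5-day ozone average gives OR 1.14 (as reported),
# a 2-IQR rise under the log-linear assumption would give about 1.30.
or_2iqr = scale_odds_ratio(1.14, 2)
```

This is why IQR-standardized ORs are convenient for comparing pollutants measured in different units: each estimate describes the same "typical" spread of exposure.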

8.

Background:

Polymyalgia rheumatica is one of the most common inflammatory rheumatologic conditions in older adults. Other inflammatory rheumatologic disorders are associated with an excess risk of vascular disease. We investigated whether polymyalgia rheumatica is associated with an increased risk of vascular events.

Methods:

We used the General Practice Research Database to identify patients with a diagnosis of incident polymyalgia rheumatica between Jan. 1, 1987, and Dec. 31, 1999. Patients were matched by age, sex and practice with up to 5 patients without polymyalgia rheumatica. Patients were followed until their first vascular event (cardiovascular, cerebrovascular, peripheral vascular) or the end of available records (May 2011). All participants were free of vascular disease before the diagnosis of polymyalgia rheumatica (or matched date). We used Cox regression models to compare time to first vascular event in patients with and without polymyalgia rheumatica.

Results:

A total of 3249 patients with polymyalgia rheumatica and 12 735 patients without were included in the final sample. Over a median follow-up period of 7.8 (interquartile range 3.3–12.4) years, the rate of vascular events was higher among patients with polymyalgia rheumatica than among those without (36.1 v. 12.2 per 1000 person-years; adjusted hazard ratio 2.6, 95% confidence interval 2.4–2.9). The increased risk of a vascular event was similar for each vascular disease end point. The magnitude of risk was higher in early disease and in patients younger than 60 years at diagnosis.

Interpretation:

Patients with polymyalgia rheumatica have an increased risk of vascular events. This risk is greatest in the youngest age groups. As with other forms of inflammatory arthritis, patients with polymyalgia rheumatica should have their vascular risk factors identified and actively managed to reduce this excess risk.

Inflammatory rheumatologic disorders such as rheumatoid arthritis,1,2 systemic lupus erythematosus,2,3 gout,4 psoriatic arthritis2,5 and ankylosing spondylitis2,6 are associated with an increased risk of vascular disease, especially cardiovascular disease, leading to substantial morbidity and premature death.2–6 Recognition of this excess vascular risk has led to management guidelines advocating screening for and management of vascular risk factors.7–9

Polymyalgia rheumatica is one of the most common inflammatory rheumatologic conditions in older adults,10 with a lifetime risk of 2.4% for women and 1.7% for men.11 To date, evidence regarding the risk of vascular disease in patients with polymyalgia rheumatica is unclear. There are a number of biologically plausible mechanisms linking polymyalgia rheumatica and vascular disease. These include the inflammatory burden of the disease,12,13 the association of the disease with giant cell arteritis (causing an inflammatory vasculopathy, which may lead to subclinical arteritis, stenosis or aneurysms),14 and the adverse effects of long-term corticosteroid treatment (e.g., diabetes, hypertension and dyslipidemia).15,16 Paradoxically, however, use of corticosteroids in patients with polymyalgia rheumatica may actually decrease vascular risk by controlling inflammation.17 A recent systematic review concluded that although some evidence exists to support an association between vascular disease and polymyalgia rheumatica,18 the existing literature presents conflicting results, with some studies reporting an excess risk of vascular disease19,20 and vascular death,21,22 and others reporting no association.23–26 Most current studies are limited by poor methodologic quality and small samples, and are based on secondary care cohorts, who may have more severe disease, yet most patients with polymyalgia rheumatica receive treatment exclusively in primary care.27

The General Practice Research Database (GPRD), based in the United Kingdom, is a large electronic system for primary care records. It has been used as a data source for previous studies,28 including studies on the association of inflammatory conditions with vascular disease29 and on the epidemiology of polymyalgia rheumatica in the UK.30 The aim of the current study was to examine the association between polymyalgia rheumatica and vascular disease in a primary care population.
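The Results report incidence as events per 1000 person-years (36.1 v. 12.2), the natural denominator when follow-up time differs between patients. A sketch of the arithmetic; the raw event and person-year counts below are assumptions chosen only to reproduce the published rate, not the study's actual totals:

```python
def rate_per_1000_person_years(events, person_years):
    """Incidence rate expressed per 1000 person-years of follow-up."""
    return 1000.0 * events / person_years

# Illustrative counts (assumptions): 361 events over 10 000 person-years
# reproduces the 36.1 per 1000 person-years reported for the
# polymyalgia rheumatica group.
pmr_rate = rate_per_1000_person_years(361, 10_000)

# Crude rate ratio from the two published rates; the adjusted hazard
# ratio of 2.6 additionally accounts for covariates in the Cox model.
crude_ratio = 36.1 / 12.2  # roughly 3.0
```

The crude rate ratio being close to the adjusted hazard ratio suggests the matching on age, sex and practice already removed much of the confounding.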

9.

Background:

Persistent postoperative pain continues to be an underrecognized complication. We examined the prevalence of and risk factors for this type of pain after cardiac surgery.

Methods:

We enrolled patients scheduled for coronary artery bypass grafting or valve replacement, or both, from Feb. 8, 2005, to Sept. 1, 2009. Validated measures were used to assess (a) preoperative anxiety and depression, tendency to catastrophize in the face of pain, health-related quality of life and presence of persistent pain; (b) pain intensity and interference in the first postoperative week; and (c) presence and intensity of persistent postoperative pain at 3, 6, 12 and 24 months after surgery. The primary outcome was the presence of persistent postoperative pain during 24 months of follow-up.

Results:

A total of 1247 patients completed the preoperative assessment. Follow-up retention rates at 3 and 24 months were 84% and 78%, respectively. The prevalence of persistent postoperative pain decreased significantly over time, from 40.1% at 3 months to 22.1% at 6 months, 16.5% at 12 months and 9.5% at 24 months; the pain was rated as moderate to severe in 3.6% at 24 months. Acute postoperative pain predicted both the presence and severity of persistent postoperative pain. The more intense the pain during the first week after surgery and the more it interfered with functioning, the more likely the patients were to report persistent postoperative pain. Pre-existing persistent pain and increased preoperative anxiety also predicted the presence of persistent postoperative pain.

Interpretation:

Persistent postoperative pain of nonanginal origin after cardiac surgery affected a substantial proportion of the study population. Future research is needed to determine whether interventions to modify certain risk factors, such as preoperative anxiety and the severity of pain before and immediately after surgery, may help to minimize or prevent persistent postoperative pain.

Postoperative pain that persists beyond the normal time for tissue healing (> 3 mo) is increasingly recognized as an important complication after various types of surgery and can have serious consequences on patients’ daily living.1–3 Cardiac surgeries, such as coronary artery bypass grafting (CABG) and valve replacement, rank among the most frequently performed interventions worldwide.4 They aim to improve survival and quality of life by reducing symptoms, including anginal pain. However, persistent postoperative pain of nonanginal origin has been reported in 7% to 60% of patients following these surgeries.5–23 Such variability is common in other types of major surgery and is due mainly to differences in the definition of persistent postoperative pain, study design, data collection methods and duration of follow-up.1–3,24

Few prospective cohort studies have examined the exact time course of persistent postoperative pain after cardiac surgery, and follow-up has always been limited to a year or less.9,14,25 Factors that put patients at risk of this type of problem are poorly understood.26 Studies have reported inconsistent results regarding the contribution of age, sex, body mass index, preoperative angina, surgical technique, grafting site, postoperative complications or level of opioid consumption after surgery.5–7,9,13,14,16–19,21–23,25,27 Only 1 study investigated the role of chronic nonanginal pain before surgery as a contributing factor;21 5 others prospectively assessed the association between persistent postoperative pain and acute pain intensity in the first postoperative week but reported conflicting results.13,14,21,22,25 All of the above studies were carried out in a single hospital and included relatively small samples. None of the studies examined the contribution of psychological factors such as levels of anxiety and depression before cardiac surgery, although these factors have been shown to influence acute or persistent postoperative pain in other types of surgery.1,24,28,29

We conducted a prospective multicentre cohort study (the CARD-PAIN study) to determine the prevalence of persistent postoperative pain of nonanginal origin up to 24 months after cardiac surgery and to identify risk factors for the presence and severity of the condition.

10.
Background:In the context of the Canadian mission in Afghanistan, substantial media attention has been placed on mental health and lack of access to treatment among Canadian Forces personnel. We compared trends in the prevalence of suicidal behaviour and the use of mental health services between Canadian military personnel and the general population from 2002 to 2012/13.Methods:We obtained data for respondents aged 18–60 years who participated in 4 nationally representative surveys by Statistics Canada designed to permit comparisons between populations and trends over time. Surveys of the general population were conducted in 2002 (n = 25 643) and 2012 (n = 15 981); those of military personnel were conducted in 2002 (n = 5153) and 2013 (n = 6700). We assessed the lifetime and past-year prevalence of suicidal ideation, plans and attempts, as well as use of mental health services.Results:In 2012/13, but not in 2002, military personnel had significantly higher odds of both lifetime and past-year suicidal ideation than the civilian population (lifetime: adjusted odds ratio [OR] 1.32, 95% confidence interval [CI] 1.17–1.50; past year: adjusted OR 1.34, 95% CI 1.09–1.66). The same was true for suicidal plans (lifetime: adjusted OR 1.64, 95% CI 1.35–1.99; past year: adjusted OR 1.66, 95% CI 1.18–2.33). Among respondents who reported past-year suicidal ideation, those in the military had a significantly higher past-year utilization rate of mental health services than those in the civilian population in both 2002 (adjusted OR 2.02, 95% CI 1.31–3.13) and 2012/13 (adjusted OR 3.14, 95% CI 1.86–5.28).Interpretation:Canadian Forces personnel had a higher prevalence of suicidal ideation and plans in 2012/13 and a higher use of mental health services in 2002 and 2012/13 than the civilian population.Suicide is a leading cause of death around the world in military and civilian populations. 
13 There has been increased attention paid to suicidal behaviour in Canada, and a number of initiatives are being put in place to prevent suicide through better recognition and treatment of mental disorders.4 Examples of major Canadian initiatives include creation of a national Mental Health Commission of Canada,5 development of a federal framework for suicide prevention,6 large investments in military and veteran mental health services, and targeted efforts to formulate comprehensive suicide prevention strategies among military and veteran populations.4,7 Despite these initiatives, the prevalence of suicide in Canada has not changed appreciably in recent years.8,9A recent report on suicides in the Canadian Forces did not find an overall increase in the prevalence of suicide between 1995 and 2014.10 However, the prevalence increased substantially over that time in the subgroup of male army personnel in the Regular Force.10 In the United States, the army has observed steady increases in the prevalence of suicide attempts and completed suicide by soldiers since 2004, whereas the prevalence of suicide has remained unchanged in the general population.3,11,12 Findings from the US are not generalizable to the Canadian military because of differences in recruitment, deployment policies and health care systems.13Suicidal ideation, plans and attempts are strong risk factors for death by suicide.14 A history of suicide attempt is the strongest predictor of future attempts.15 Suicidal ideation is also an important target for intervention because previous work has shown a rapid transition from first-onset suicidal ideation to plans and attempts within the same year.16 It remains unknown whether nonfatal suicidal behaviour in military and civilian populations in Canada has changed over time.Another area of major public health concern is that most people with suicidal behaviour do not receive mental health services. 
In nationally representative civilian samples in Canada and 21 other countries, most respondents with suicidal behaviours (60%) did not receive mental health services.17,18 The use of such services among Canadian military personnel with suicidal behaviours remains unknown. The media have recently criticized the Canadian Armed Forces and Veterans Affairs Canada for the insufficient services available to military personnel and veterans.19 We compared trends in the prevalence of suicidal behaviours and help-seeking between Canadian civilian and military populations over a 10-year period from 2002 to 2012/13 using data from 4 nationally representative surveys.  相似文献   
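The adjusted odds ratios above come from regression models that account for survey design and covariates. As an unadjusted illustration only, an odds ratio and its Wald 95% confidence interval can be computed from a 2 × 2 table; all counts below are hypothetical, not values from the surveys:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts: past-year suicidal ideation among military
# vs. civilian respondents (illustrative numbers only)
or_, lo, hi = odds_ratio_ci(300, 6400, 550, 15431)
```

The confidence interval is built on the log scale because the sampling distribution of the log odds ratio is approximately normal; the reported adjusted estimates additionally control for confounders via logistic regression.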

11.

Background:

Although diacetylmorphine has been proven to be more effective than methadone maintenance treatment for opioid dependence, its direct costs are higher. We compared the cost-effectiveness of diacetylmorphine and methadone maintenance treatment for chronic opioid dependence refractory to treatment.

Methods:

We constructed a semi-Markov cohort model using data from the North American Opiate Medication Initiative trial, supplemented with administrative data for the province of British Columbia and other published data, to capture the chronic, recurrent nature of opioid dependence. We calculated incremental cost-effectiveness ratios to compare diacetylmorphine and methadone over 1-, 5-, 10-year and lifetime horizons.

Results:

Diacetylmorphine was found to be a dominant strategy over methadone maintenance treatment in each of the time horizons. Over a lifetime horizon, our model showed that people receiving methadone gained 7.46 discounted quality-adjusted life-years (QALYs) on average (95% credibility interval [CI] 6.91–8.01) and generated a societal cost of $1.14 million (95% CI $736 800–$1.78 million). Those who received diacetylmorphine gained 7.92 discounted QALYs on average (95% CI 7.32–8.53) and generated a societal cost of $1.10 million (95% CI $724 100–$1.71 million). Cost savings in the diacetylmorphine cohort were realized primarily because of reductions in the costs related to criminal activity. Probabilistic sensitivity analysis showed that the probability of diacetylmorphine being cost-effective at a willingness-to-pay threshold of $0 per QALY gained was 76%; the probability was 95% at a threshold of $100 000 per QALY gained. Results were confirmed over a range of sensitivity analyses.

Interpretation:

Using mathematical modelling to extrapolate results from the North American Opiate Medication Initiative, we found that diacetylmorphine may be more effective and less costly than methadone among people with chronic opioid dependence refractory to treatment.Opioid substitution with methadone is the most common treatment of opioid dependence.13 Participation in a methadone maintenance treatment program has been associated with decreases in illicit drug use,4 criminality5 and mortality.6,7 However, longitudinal studies have shown that most people who receive opioid substitution treatment are unable to abstain from illicit drug use for sustained periods, either switching from treatment to regular opioid use or continuing to use opioids while in treatment.813 An estimated 15%–25% of the most marginalized methadone clients do not benefit from treatment in terms of sustained abstention from the use of illicit opioids.14The North American Opiate Medication Initiative was a randomized controlled trial that compared supervised, medically prescribed injectable diacetylmorphine and optimized methadone maintenance treatment in people with long-standing opioid dependence and multiple failed treatment attempts with methadone or other forms of treatment.15 The trial was conducted in two Canadian cities (Vancouver, British Columbia; and Montréal, Quebec). 
Both treatment protocols included a comprehensive range of psychosocial services (e.g., addiction counselling, relapse prevention, case management, and individual and group interventions) and primary care services (e.g., testing for blood-borne diseases, provision of HIV treatment, and treatment of acute and chronic physical and mental health complications of substance use) in keeping with Health Canada best practices.16 The results of the trial confirmed findings of prior studies showing diacetylmorphine to be more effective than methadone maintenance treatment in retaining opioid-dependent patients in treatment15,1720 and improving health and social functioning.19,21,22 Diacetylmorphine treatment has been proposed to reach a specific population of people with opioid dependence refractory to treatment who are at high risk of adverse health consequences and engagement in criminal activities to acquire the illicit drugs.For guiding policy-makers, the North American Opiate Medication Initiative alone does not address all the important considerations for decision-making. In addition to political challenges associated with the therapy,23 there remains concern over the direct cost of diacetylmorphine over the long term, because it can be as much as 10 times greater than conventional methadone maintenance treatment.21 The North American Opiate Medication Initiative was only one year in duration, but a policy to introduce diacetylmorphine might have both positive and negative longer-term implications.We extrapolated outcomes from the North American Opiate Medication Initiative to estimate the long-term cost-effectiveness of diacetylmorphine versus methadone maintenance treatment for chronic, refractory opioid dependence.  相似文献   
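A "dominant strategy" in the sense used above is one that is both less costly and more effective, which makes the incremental cost-effectiveness ratio (ICER) negative. A minimal sketch using only the mean lifetime costs and QALYs reported in the abstract (the function and variable names are illustrative, not from the study's model):

```python
def icer(delta_cost, delta_qaly):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained.
    A negative value with delta_qaly > 0 means the new strategy is
    dominant (cheaper and more effective)."""
    return delta_cost / delta_qaly

# Mean societal costs and QALYs over a lifetime horizon, from the abstract
cost_methadone, qaly_methadone = 1_140_000, 7.46
cost_diacetyl, qaly_diacetyl = 1_100_000, 7.92

delta_cost = cost_diacetyl - cost_methadone   # negative: diacetylmorphine costs less
delta_qaly = qaly_diacetyl - qaly_methadone   # positive: ...and yields more QALYs
ratio = icer(delta_cost, delta_qaly)          # negative -> dominant strategy
```

This is why the abstract can report cost-effectiveness even at a willingness-to-pay threshold of $0 per QALY: no payment per QALY is needed when the more effective strategy also saves money.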

12.
13.

Background

High prevalence of infant macrosomia (up to 36%, the highest in the world) has been reported in some First Nations communities in the Canadian province of Quebec and the eastern area of the province of Ontario. We aimed to assess whether infant macrosomia was associated with elevated risks of perinatal and postneonatal mortality among First Nations people in Quebec.

Methods

We calculated risk ratios (RRs) of perinatal and postneonatal mortality by birthweight for gestational age, comparing births to First Nations women (n = 5193) versus women whose mother tongue is French (n = 653 424, the majority reference group) in Quebec 1991–2000.

Results

The prevalence of infant macrosomia (birthweight for gestational age > 90th percentile) was 27.5% among births to First Nations women, which was 3.3 times (95% confidence interval [CI] 3.2–3.5) higher than the prevalence (8.3%) among births to women whose mother tongue is French. Risk ratios for perinatal mortality among births to First Nations women were 1.8 (95% CI 1.3–2.5) for births with weight appropriate for gestational age, 4.1 (95% CI 2.4–7.0) for small-for-gestational-age (< 10th percentile) births and < 1 (not significant) for macrosomic births compared to births among women whose mother tongue is French. The RRs for postneonatal mortality were 4.3 (95% CI 2.7–6.7) for infants with appropriate-for-gestational-age birthweight and 8.3 (95% CI 4.0–17.0) for infants with macrosomia.

Interpretation

Macrosomia was associated with a generally protective effect against perinatal death, but substantially greater risks of postneonatal death among births to First Nations women in Quebec versus women whose mother tongue is French.A trend toward higher birthweights has emerged in recent decades.13 Reflected in this trend is a rise in the prevalence of infant macrosomia, commonly defined as either a birthweight greater than 4000 g or a birthweight for gestational age greater than the 90th percentile relative to a fetal growth standard.48 Maternal obesity, impaired glucose tolerance and gestational diabetes mellitus are important risk factors for infant macrosomia9,10 and are known to afflict a much higher proportion of people in Aboriginal populations than in the general population.1114 This is true especially for Aboriginal populations in which a traditional lifestyle has changed to a less physically active, modern lifestyle in recent decades. A high prevalence of infant macrosomia (up to 36%, which, to the best of our knowledge, is the highest in the world) has been reported in some First Nations communities of Quebec and eastern Ontario in Canada.1517 However, little is known about the implications of this high prevalence for perinatal and infant health of First Nations people in these regions. We examined whether infant macrosomia was associated with increased risk for perinatal and postneonatal death among First Nations infants in Quebec.  相似文献   
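Risk ratios of the kind reported above can be computed from group counts with a log-normal approximation for the confidence interval. The counts below are back-calculated approximately from the prevalences and denominators in the abstract (27.5% of 5193 vs. 8.3% of 653 424) and are illustrative, not the study's actual cell counts:

```python
import math

def risk_ratio_ci(cases1, n1, cases0, n0, z=1.96):
    """Risk ratio (group 1 vs. group 0) with a Wald 95% CI
    computed on the log scale."""
    rr = (cases1 / n1) / (cases0 / n0)
    se_log = math.sqrt(1/cases1 - 1/n1 + 1/cases0 - 1/n0)
    lo = math.exp(math.log(rr) - z * se_log)
    hi = math.exp(math.log(rr) + z * se_log)
    return rr, lo, hi

# Approximate counts for macrosomia: First Nations vs. reference births
rr, lo, hi = risk_ratio_ci(1428, 5193, 54234, 653_424)
```

With these approximate counts the function recovers a risk ratio close to the 3.3-fold difference reported in the abstract.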

14.
15.

Background:

Morbidity due to cardiovascular disease is high among First Nations people. The extent to which this may be related to the likelihood of coronary angiography is unclear. We examined the likelihood of coronary angiography after acute myocardial infarction (MI) among First Nations and non–First Nations patients.

Methods:

Our study included adults with incident acute MI between 1997 and 2008 in Alberta. We determined the likelihood of angiography among First Nations and non–First Nations patients, adjusted for important confounders, using the Alberta Provincial Project for Outcome Assessment in Coronary Heart Disease (APPROACH) database.

Results:

Of the 46 764 people with acute MI, 1043 (2.2%) were First Nations. First Nations patients were less likely to receive angiography within 1 day after acute MI (adjusted odds ratio [OR] 0.73, 95% confidence interval [CI] 0.62–0.87). Among First Nations and non–First Nations patients who underwent angiography (64.9%), there was no difference in the likelihood of percutaneous coronary intervention (PCI) (adjusted hazard ratio [HR] 0.92, 95% CI 0.83–1.02) or coronary artery bypass grafting (CABG) (adjusted HR 1.03, 95% CI 0.85–1.25). First Nations people had worse survival if they received medical management alone (adjusted HR 1.38, 95% CI 1.07–1.77) or if they underwent PCI (adjusted HR 1.38, 95% CI 1.06–1.80), whereas survival was similar among First Nations and non–First Nations patients who received CABG.

Interpretation:

First Nations people were less likely to undergo angiography after acute MI and experienced worse long-term survival compared with non–First Nations people. Efforts to improve access to angiography for First Nations people may improve outcomes.Although cardiovascular disease has been decreasing in Canada,1 First Nations people have a disproportionate burden of the disease. First Nations people in Canada have a 2.5-fold higher prevalence of cardiovascular disease than non–First Nations people,2 with hospital admissions for cardiovascular-related events also increasing.3The prevalence of cardiovascular disease in First Nations populations is presumed to be reflective of the prevalence of cardiovascular risk factors.47 However, the disproportionate increase in rates of hospital admission suggests that suboptimal management of cardiovascular disease or its risk factors may also influence patient outcomes.2,3 Racial disparities in the quality of cardiovascular care resulting in adverse outcomes have been documented, although most studies have focused on African-American, Hispanic and Asian populations.8,9 As a result, it is unclear whether suboptimal delivery of guideline-recommended treatment contributes to increased cardiovascular morbidity and mortality among First Nations people.1012We undertook a population-based study involving adults with incident acute myocardial infarction (MI) to examine the receipt of guideline-recommended coronary angiography among First Nations and non–First Nations patients.1012 Among patients who underwent angiography, we sought to determine whether there were differences between First Nations and non–First Nations patients in the likelihood of revascularization and long-term survival.  相似文献   

16.

Background:

A hip fracture causes bleeding, pain and immobility, and initiates inflammatory, hypercoagulable, catabolic and stress states. Accelerated surgery may improve outcomes by reducing the duration of these states and immobility. We undertook a pilot trial to determine the feasibility of a trial comparing accelerated care (i.e., rapid medical clearance and surgery) and standard care among patients with a hip fracture.

Methods:

Patients aged 45 years or older who, during weekday, daytime working hours, received a diagnosis of a hip fracture requiring surgery were randomly assigned to receive accelerated or standard care. Our feasibility outcomes included the proportion of eligible patients randomly assigned, completeness of follow-up and timelines of accelerated surgery. The main clinical outcome, assessed by data collectors and adjudicators who were unaware of study group allocations, was a major perioperative complication (i.e., a composite of death, preoperative myocardial infarction, myocardial injury after noncardiac surgery, pulmonary embolism, pneumonia, stroke, and life-threatening or major bleeding) within 30 days of randomization.

Results:

Of patients eligible for inclusion, 80% consented and were randomly assigned to groups (30 to accelerated care and 30 to standard care) at 2 centres in Canada and 1 centre in India. All patients completed 30-day follow-up. The median time from diagnosis to surgery was 6.0 hours in the accelerated care group and 24.2 hours in the standard care group (p < 0.001). A major perioperative complication occurred in 9 (30%) of the patients in the accelerated care group and 14 (47%) of the patients in the standard care group (hazard ratio 0.60, 95% confidence interval 0.26–1.39).

Interpretation:

These results show the feasibility of a trial comparing accelerated and standard care among patients with hip fracture and support a definitive trial. Trial registration: ClinicalTrials.gov, no. NCT01344343.Annually in North America, 0.8% of women and 0.4% of men aged 65 years or older experience a hip fracture.1 Patients who sustain a hip fracture face a high risk of serious complications (i.e., cardiovascular, venous thrombotic, infectious and hemorrhagic)2,3 that can result in a prolonged hospital stay and death: 30-day mortality is 9% among men and 5% among women.1 Among surviving patients who were community-dwelling before their fracture, 11% become bed-ridden and 16% are admitted to a long-term care facility.4A hip fracture results in pain, bleeding and immobility. These factors initiate inflammatory, hypercoagulable, catabolic and stress states that can precipitate medical complications.511 Early surgery shortens the exposure to these harmful states and, therefore, may reduce morbidity and mortality. Furthermore, earlier surgery may shorten the period of immobility, which may improve functional outcomes and reduce costs.A meta-analysis of observational studies evaluating the timing of surgery for a hip fracture included 5 studies (involving 4208 patients and 721 deaths) that reported the adjusted risk of mortality.12 Earlier surgery, irrespective of the cut-off for delay (24, 48 or 72 h), was associated with significantly lower mortality (adjusted relative risk 0.81, 95% confidence interval [CI] 0.68–0.96, p = 0.01). Although these data are encouraging, the apparent benefit may be a result of residual confounding (e.g., sicker patients may have had surgery delayed for medical optimization, which may not have been adequately adjusted for in the analyses). 
Conversely, the real potential of early surgery may be underestimated because the greatest impact may occur when a hip fracture is treated much more quickly than the timelines assessed in the observational studies (24, 48 or 72 h), similar to how treatment of an acute myocardial infarction or stroke within hours has the most dramatic impact.13,14In many countries, including Canada, most patients with a hip fracture wait longer than 24 hours to undergo surgery. The 2 main reasons for delay are preoperative medical clearance and operating room access,1521 both of which are potentially modifiable. We undertook a pilot trial to determine the feasibility (as assessed by the proportion of eligible patients randomly assigned, completeness of follow-up and timeliness of accelerated surgery) of a large randomized controlled trial (RCT) comparing accelerated care and standard care among adults with a hip fracture.  相似文献   

17.
Robin Skinner, Steven McFaull. CMAJ 2012;184(9):1029–1034

Background:

Suicide is the second leading cause of death for young Canadians (10–19 years of age) — a disturbing trend that has shown little improvement in recent years. Our objective was to examine suicide trends among Canadian children and adolescents.

Methods:

We conducted a retrospective analysis of standardized suicide rates using Statistics Canada mortality data for the period spanning from 1980 to 2008. We analyzed the data by sex and by suicide method over time for two age groups: 10–14 year olds (children) and 15–19 year olds (adolescents). We quantified annual trends by calculating the average annual percent change (AAPC).

Results:

We found an average annual decrease of 1.0% (95% confidence interval [CI] −1.5 to −0.4) in the suicide rate for children and adolescents, but stratification by age and sex showed significant variation. We saw an increase in suicide by suffocation among female children (AAPC = 8.1%, 95% CI 6.0 to 10.4) and adolescents (AAPC = 8.0%, 95% CI 6.2 to 9.8). In addition, we noted a decrease in suicides involving poisoning and firearms during the study period.

Interpretation:

Our results show that suicide rates in Canada are increasing among female children and adolescents and decreasing among male children and adolescents. Limiting access to lethal means has some potential to mitigate risk. However, suffocation, which has become the predominant method for committing suicide for these age groups, is not amenable to this type of primary prevention.Suicide was ranked as the second leading cause of death among Canadians aged 10–34 years in 2008.1 It is recognized that suicidal behaviour and ideation is an important public health issue among children and adolescents; disturbingly, suicide is a leading cause of Canadian childhood mortality (i.e., among youths aged 10–19 years).2,3Between 1980 and 2008, there were substantial improvements in mortality attributable to unintentional injury among 10–19 year olds, with rates decreasing from 37.7 per 100 000 to 10.7 per 100 000; suicide rates, however, showed less improvement, with only a small reduction during the same period (from 6.2 per 100 000 in 1980 to 5.2 per 100 000 in 2008).1Previous studies that looked at suicides among Canadian adolescents and young adults (i.e., people aged 15–25 years) have reported rates as being generally stable over time, but with a marked increase in suicides by suffocation and a decrease in those involving firearms.2 There is limited literature on self-inflicted injuries among children 10–14 years of age in Canada and the United States, but there appears to be a trend toward younger children starting to self-harm.3,4 Furthermore, the trend of suicide by suffocation moving to younger ages may be partly due to cases of the “choking game” (self-strangulation without intent to cause permanent harm) that have been misclassified as suicides.57Risk factors for suicidal behaviour and ideation in young people include a psychiatric diagnosis (e.g., depression), substance abuse, past suicidal behaviour, family factors and other life stressors (e.g., relationships, 
bullying) that have complex interactions.8 A suicide attempt involves specific intent, plans and availability of lethal means, such as firearms,9 elevated structures10 or substances.11 The existence of “pro-suicide” sites on the Internet and in social media12 may further increase risk by providing details of various ways to commit suicide, as well as evaluations ranking these methods by effectiveness, amount of pain involved and length of time to produce death.1315Our primary objective was to present the patterns of suicide among children and adolescents (aged 10–19 years) in Canada.  相似文献   
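The average annual percent change (AAPC) used above is conventionally estimated from the slope of a log-linear regression of rate on year. The abstract does not give the formula, so the sketch below assumes this standard definition, using a synthetic rate series rather than the Statistics Canada data:

```python
import math

def aapc(years, rates):
    """Average annual percent change: fit log(rate) = a + b*year by
    ordinary least squares, then AAPC = (exp(b) - 1) * 100."""
    n = len(years)
    mean_x = sum(years) / n
    mean_y = sum(math.log(r) for r in rates) / n
    b = sum((x - mean_x) * (math.log(r) - mean_y)
            for x, r in zip(years, rates)) \
        / sum((x - mean_x) ** 2 for x in years)
    return (math.exp(b) - 1) * 100

# Synthetic series growing exactly 8% per year, echoing the suffocation
# trend above; the fitted AAPC should recover 8.0
years = list(range(1980, 2009))
rates = [5.0 * 1.08 ** (y - 1980) for y in years]
```

Fitting on the log scale is what makes the slope interpretable as a constant annual percent change rather than a constant absolute change in the rate.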

18.

Background:

Despite a low prevalence of chronic kidney disease (estimated glomerular filtration rate [GFR] < 60 mL/min per 1.73 m2), First Nations people have high rates of kidney failure requiring chronic dialysis or kidney transplantation. We sought to examine whether the presence and severity of albuminuria contributes to the progression of chronic kidney disease to kidney failure among First Nations people.

Methods:

We identified all adult residents of Alberta (age ≥ 18 yr) for whom an outpatient serum creatinine measurement was available from May 1, 2002, to Mar. 31, 2008. We determined albuminuria using urine dipsticks and categorized results as normal (i.e., no albuminuria), mild, heavy or unmeasured. Our primary outcome was progression to kidney failure (defined as the need for chronic dialysis or kidney transplantation, or a sustained doubling of serum creatinine levels). We calculated rates of progression to kidney failure by First Nations status, by estimated GFR and by albuminuria category. We determined the relative hazard of progression to kidney failure for First Nations compared with non–First Nations participants by level of albuminuria and estimated GFR.

Results:

Of the 1 816 824 participants we identified, 48 669 (2.7%) were First Nations. First Nations people were less likely to have normal albuminuria compared with non–First Nations people (38.7% v. 56.4%). Rates of progression to kidney failure were consistently 2- to 3-fold higher among First Nations people than among non–First Nations people, across all levels of albuminuria and estimated GFRs. Compared with non–First Nations people, First Nations people with an estimated GFR of 15.0–29.9 mL/min per 1.73 m2 had the highest risk of progression to kidney failure, with similar hazard ratios for those with normal and heavy albuminuria.

Interpretation:

Albuminuria confers a similar risk of progression to kidney failure for First Nations and non–First Nations people.Severe chronic kidney disease (estimated glomerular filtration rate [GFR] < 30 mL/min per 1.73 m2) is almost 2-fold higher, and rates of end-stage kidney disease (defined as the need for chronic dialysis or kidney transplantation) are 4-fold higher, among First Nations people compared with non–First Nations people in Canada.1,2 The reasons for the higher rate of end-stage kidney disease when there is a lower prevalence of earlier stages of chronic kidney disease in First Nations people (estimated GFR 30–60 mL/min per 1.73 m2) are unclear. The rising incidence of diabetes is seen as the major cause of kidney failure among First Nations people;3 however, First Nations people without diabetes are also 2–3 times more likely to eventually have kidney failure.4 These observations suggest that diabetes is not the sole determinant of risk for kidney failure and that there are yet undefined factors that may accelerate the progression of chronic kidney disease in the First Nations population.5Recent studies have highlighted the prognostic importance of albuminuria as a risk factor for kidney failure.6 Although ethnic variations in the prevalence and severity of albuminuria and their association with renal outcomes have been reported, these studies are primarily limited to non–First Nations populations.7 A limited number of studies have reported an increased prevalence of albuminuria among First Nations people, suggesting the potential association between albuminuria and risk of kidney failure.8,9 We sought to measure the presence and severity of albuminuria and estimate the risk of progression to kidney failure for First Nations people compared with non–First Nations people using a community-based cohort.  相似文献   

19.

Background:

Some children feel pain during wound closures using tissue adhesives. We sought to determine whether a topically applied analgesic solution of lidocaine–epinephrine–tetracaine would decrease pain during tissue adhesive repair.

Methods:

We conducted a randomized, placebo-controlled, blinded trial involving 221 children between the ages of 3 months and 17 years. Patients were enrolled between March 2011 and January 2012 when presenting to a tertiary-care pediatric emergency department with lacerations requiring closure with tissue adhesive. Patients received either lidocaine–epinephrine–tetracaine or placebo before undergoing wound closure. Our primary outcome was the pain rating of adhesive application according to the colour Visual Analogue Scale and the Faces Pain Scale — Revised. Our secondary outcomes were physician ratings of difficulty of wound closure and wound hemostasis, in addition to their prediction as to which treatment the patient had received.

Results:

Children who received the analgesic before wound closure reported less pain (median 0.5, interquartile range [IQR] 0.25–1.50) than those who received placebo (median 1.00, IQR 0.38–2.50) as rated using the colour Visual Analogue Scale (p = 0.01) and Faces Pain Scale – Revised (median 0.00, IQR 0.00–2.00, for analgesic v. median 2.00, IQR 0.00–4.00, for placebo, p < 0.01). Patients who received the analgesic were significantly more likely to report having or to appear to have a pain-free procedure (relative risk [RR] of pain 0.54, 95% confidence interval [CI] 0.37–0.80). Complete hemostasis of the wound was also more common among patients who received lidocaine–epinephrine–tetracaine than among those who received placebo (78.2% v. 59.3%, p = 0.008).

Interpretation:

Treating minor lacerations with lidocaine–epinephrine–tetracaine before wound closure with tissue adhesive reduced ratings of pain and increased the proportion of pain-free repairs among children aged 3 months to 17 years. This low-risk intervention may benefit children with lacerations requiring tissue adhesives instead of sutures. Trial registration: ClinicalTrials.gov, no. PR 6138378804.Minor laceration repair with tissue adhesive, or “skin glue,” is common in pediatrics. Although less painful than cutaneous sutures,1 tissue adhesives polymerize through an exothermic reaction that may cause a burning, painful sensation. Pain is dependent on the specific formulation of the adhesive used and the method of application. One study of different tissue adhesives reported 23.8%–40.5% of participants feeling a “burning sensation”,2 whereas another study reported “pain” in 17.6%–44.1% of children.3 The amounts of adhesive applied, method of application and individual patient characteristics can also influence the feeling of pain.3,4 Because tissue adhesives polymerize on contact with moisture,4,5 poor wound hemostasis has the potential to cause premature setting of the adhesive, leading to less efficient and more painful repairs.6Preventing procedural pain is a high priority in pediatric care.7 Inadequate analgesia for pediatric procedures may result in more complicated procedures, increased pain sensitivity with future procedures8 and increased fear and anxiety of medical experiences persisting into adulthood.9 A practical method to prevent pain during laceration repairs with tissue adhesive would have a substantial benefit for children.A topically applied analgesic solution containing lidocaine–epinephrine–tetracaine with vasoconstrictive properties provides safe and effective pain control during wound repair using sutures.10 A survey of pediatric emergency fellowship directors in the United States reported that 76% of respondents use this solution or a similar solution 
when suturing 3-cm chin lacerations in toddlers.11 However, in a hospital chart review, this solution was used in less than half of tissue adhesive repairs, the remainder receiving either local injection of anesthetic or no pain control.12 Reluctance to use lidocaine–epinephrine–tetracaine with tissue adhesive may be due to the perception that it is not worth the minimum 20-minute wait required for the analgesic to take effect13 or to a lack of awareness that tissue adhesives can cause pain.We sought to investigate whether preapplying lidocaine–epinephrine–tetracaine would decrease pain in children during minor laceration repair using tissue adhesive.  相似文献   

20.

Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号