Similar articles
20 similar articles retrieved.
1.

Background

In 1992, British American Tobacco had its Canadian affiliate, Imperial Tobacco Canada, destroy internal research documents that could expose the company to liability or embarrassment. Sixty of these destroyed documents were subsequently uncovered in British American Tobacco’s files.

Methods

Legal counsel for Imperial Tobacco Canada provided a list of 60 destroyed documents to British American Tobacco. Information in this list was used to search for copies of the documents in British American Tobacco files released through court disclosure. We reviewed and summarized this information.

Results

Imperial Tobacco destroyed documents that included evidence from scientific reviews prepared by British American Tobacco’s researchers, as well as 47 original research studies, 35 of which examined the biological activity and carcinogenicity of tobacco smoke. The documents also describe British American Tobacco research on cigarette modifications and toxic emissions, including the ways in which consumers adapted their smoking behaviour in response to these modifications. The documents also depict a comprehensive research program on the pharmacology of nicotine and the central role of nicotine in smoking behaviour. British American Tobacco scientists noted that “… the present scale of the tobacco industry is largely dependent on the intensity and nature of the pharmacological action of nicotine,” and that “... should nicotine become less attractive to smokers, the future of the tobacco industry would become less secure.”

Interpretation

The scientific evidence contained in the documents destroyed by Imperial Tobacco demonstrates that British American Tobacco had collected evidence that cigarette smoke was carcinogenic and addictive. The evidence that Imperial Tobacco sought to destroy had important implications for government regulation of tobacco.On May 8, 1998, the US State of Minnesota reached a historic settlement with the tobacco industry.1 As part of the settlement, the 7 tobacco manufacturers named in the trial were ordered to pay more than $200 billion dollars and to make public over 40 million pages of internal tobacco industry documents. These documents have provided a wealth of information about the conduct of the tobacco industry, the health effects of smoking and the role of cigarette design in promoting addiction.2A number of the most sensitive documents were concealed or destroyed before the trial as the threat of litigation grew.3,4 Based on advice from their lawyers, companies such as British American Tobacco instituted a policy of document destruction.5 A.G. Thomas, the head of Group Security at British American Tobacco, explained the criteria for selecting reports for destruction: “In determining whether a redundant document contains sensitive information, holders should apply the rule of thumb of whether the contents would harm or embarrass the Company or an individual if they were to be made public.”6British American Tobacco’s destruction policy was most rigorously pursued by its subsidiaries in the United States, Canada and Australia, likely because of the imminent threat of litigation in these countries. The policy was developed following the 1989 decision by a Canadian judge to give Canadian government representatives access to scientific research conducted by Imperial Tobacco Canada and its principal shareholder, British American Tobacco.7 This ruling prompted British American Tobacco to undertake steps to prevent scientists in its affiliate companies from retaining industry studies and to require the destruction of sensitive documents.8,9 Canadian scientists were the most resistant to this policy,10 but they too agreed to destroy their copies of British American Tobacco’s scientific research.11In a letter dated June 5, 1992, a lawyer working on behalf of Imperial Tobacco Canada informed British American Tobacco that Imperial would destroy copies of 60 documents in compliance with the document destruction policy, and he provided reference numbers for each of these documents.12 This memo was one of the earlier industry documents to be made public, and it became a key document in legal arguments about the destruction of evidence.13 The contents of the destroyed documents to which it referred, however, had never been analyzed. All that was known was that they contained what British American Tobacco considered “sensitive” research results.14 A list of the 60 documents is available in Appendix 1 (www.cmaj.ca/cgi/content/full/cmaj.080566/DC1). Appendix 2 (www.cmaj.ca/cgi/content/full/cmaj.080566/DC1) includes summaries of 3 documents not otherwise discussed that explore the transfer of the flavouring additive coumarin to tobacco smoke.1517What was the nature of the 60 reports that Imperial and British American Tobacco wanted destroyed? 
Although Imperial dutifully destroyed its copies of these sensitive documents, other copies of the same documents were stored at British American Tobacco headquarters in the United Kingdom and were released in 1998 through court disclosure in the Minnesota Trial and subsequent legal proceedings.1 We searched British American Tobacco’s archives for each of the 60 reports using the research numbers included in the original letter.12 In this article, we present the contents of these research reports.  相似文献   

2.

Background

The Canadian CT Head Rule was developed to allow physicians to be more selective when ordering computed tomography (CT) imaging for patients with minor head injury. We sought to evaluate the effectiveness of implementing this validated decision rule at multiple emergency departments.

Methods

We conducted a matched-pair cluster-randomized trial that compared the outcomes of 4531 patients with minor head injury during two 12-month periods (before and after) at hospital emergency departments in Canada, six of which were randomly allocated as intervention sites and six as control sites. At the intervention sites, active strategies, including education, changes to policy and real-time reminders on radiologic requisitions were used to implement the Canadian CT Head Rule. The main outcome measure was referral for CT scan of the head.

Results

Baseline characteristics of patients were similar when comparing control to intervention sites. At the intervention sites, the proportion of patients referred for CT imaging increased from the “before” period (62.8%) to the “after” period (76.2%) (difference +13.3%, 95% CI 9.7%–17.0%). At the control sites, the proportion of patients referred for CT imaging also increased, from 67.5% to 74.1% (difference +6.7%, 95% CI 2.6%–10.8%). The change in mean imaging rates from the “before” period to the “after” period for intervention versus control hospitals was not significant (p = 0.16). There were no missed brain injuries or adverse outcomes.

Interpretation

Our knowledge–translation-based trial of the Canadian CT Head Rule did not reduce rates of CT imaging in Canadian emergency departments. Future studies should identify strategies to deal with barriers to implementation of this decision rule and explore more effective approaches to knowledge translation. (ClinicalTrials.gov trial register no. NCT00993252)More than six million instances of head and neck trauma are seen annually in emergency departments in Canada and the United States.1 Most are classified as minimal or minor head injury, but in a very small proportion, deterioration occurs and neurosurgical intervention is needed for intracranial hematoma.2,3 In recent years, North American use of computed tomography (CT) for many conditions in the emergency department, including minor head injury, has increased five-fold.1,4 Our own Canadian data showed marked variation in the use of CT for similar patients.5 Over 90% of CT scans are negative for clinically important brain injury.68 Owing to its high volume of usage, such imaging adds to health care costs. There have also been increasing concerns about radiation-related risk from unnecessary CT scans.9,10 Additionally, unnecessary use of CT scanning compounds the Canadian problems of overcrowding of emergency departments and inadequate access to advanced imaging for nonemergency outpatients.Clinical decision rules are derived from original research and may be defined as tools for clinical decision-making that incorporate three or more variables from a patient’s history, physical examination or simple tests.1113 The Canadian CT Head Rule comprises five high-risk and two medium-risk criteria and was derived by prospectively evaluating 3121 adults with minor head injury (Figure 1) (Appendix 1, available at www.cmaj.ca/cgi/content/full/cmaj.091974/DC1).6 The resultant decision rule was then prospectively validated in a group of 2707 patients and showed high sensitivity (100%; 95% confidence interval [CI ] 91–100) and reliability.14 The results of its validation suggested that, in patients presenting to emergency departments with minor head trauma, a rate of usage of CT imaging as low as 62.4% was possible and safe.Open in a separate windowFigure 1The Canadian CT Head Rule, as used in the study. Note: CSF = cerebrospinal fluid, CT = computed tomography, GCS = Glasgow Coma Scale.Unfortunately, most decision rules are never used after derivation because they are not adequately tested in validation or implementation studies.1519 We recently successfully implemented a similar rule, the Canadian C-Spine Rule, at multiple Canadian sites.20 Hence, the goal of the current study was to evaluate the effectiveness and safety of an active strategy to implement the Canadian CT Head Rule at multiple emergency departments. We wanted to test both the impact of the rule on rates of CT imaging and the effectiveness of an inexpensive and easily adopted implementation strategy. In addition, we wanted to further evaluate the accuracy of the rule.  相似文献   
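The key numbers in this abstract are before-and-after differences in proportions with 95% confidence intervals. The sketch below shows the basic (non-clustered) arithmetic behind such an interval; the patient counts are hypothetical, and the trial's published intervals account for the cluster-randomized design, so a naive calculation will not reproduce them exactly.

```python
from math import sqrt

def two_proportion_difference(x1, n1, x2, n2, z=1.96):
    """Naive Wald 95% CI for the difference between two independent proportions.
    A cluster-randomized trial also inflates the variance for within-site
    correlation, which this sketch deliberately ignores."""
    p1, p2 = x1 / n1, x2 / n2
    diff = p2 - p1
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, (diff - z * se, diff + z * se)

# Hypothetical counts for one intervention site: "before" versus "after" period.
diff, (lo, hi) = two_proportion_difference(x1=628, n1=1000, x2=762, n2=1000)
print(f"difference = {diff:+.1%}, 95% CI {lo:+.1%} to {hi:+.1%}")
```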

3.
4.

Background

Published decision analyses show that screening for colorectal cancer is cost-effective. However, because of the number of tests available, the optimal screening strategy in Canada is unknown. We estimated the incremental cost-effectiveness of 10 strategies for colorectal cancer screening, as well as no screening, incorporating quality of life, noncompliance and data on the costs and benefits of chemotherapy.

Methods

We used a probabilistic Markov model to estimate the costs and quality-adjusted life expectancy of 50-year-old average-risk Canadians without screening and with screening by each test. We populated the model with data from the published literature. We calculated costs from the perspective of a third-party payer, with inflation to 2007 Canadian dollars.

Results

Of the 10 strategies considered, we focused on three tests currently being used for population screening in some Canadian provinces: low-sensitivity guaiac fecal occult blood test, performed annually; fecal immunochemical test, performed annually; and colonoscopy, performed every 10 years. These strategies reduced the incidence of colorectal cancer by 44%, 65% and 81%, and mortality by 55%, 74% and 83%, respectively, compared with no screening. These strategies generated incremental cost-effectiveness ratios of $9159, $611 and $6133 per quality-adjusted life year, respectively. The findings were robust to probabilistic sensitivity analysis. Colonoscopy every 10 years yielded the greatest net health benefit.

Interpretation

Screening for colorectal cancer is cost-effective over conventional levels of willingness to pay. Annual high-sensitivity fecal occult blood testing, such as a fecal immunochemical test, or colonoscopy every 10 years offer the best value for the money in Canada.Colorectal cancer is the fourth most common cancer diagnosed in North America and the second leading cause of cancer death.1,2 An effective population-based screening program is likely to decrease mortality associated with colorectal cancer36 through earlier detection and to decrease incidence by allowing removal of precursor colorectal adenomas.7,8 Professional societies and government-sponsored committees have released guidelines for screening of average-risk individuals for colorectal cancer by means of several testing options.912 These tests vary in sensitivity, specificity, risk, costs and availability. With no published studies designed to directly compare screening strategies, decision analysis is a useful technique for examining the relative cost-effectiveness of these strategies.1321 Previous studies have shown that screening for colorectal cancer is cost-effective at conventional levels of willingness to pay, but no single strategy has emerged as clinically superior or economically dominant.22 The interpretations of economic evaluations in this area have been limited because investigators have not simultaneously accounted for the positive effects of screening on quality of life, the effect of noncompliance with screening schedules, and the greater efficacy and cost of more modern chemotherapy regimens for colorectal cancer. Furthermore, no study has included all of the strategies recommended in the 2008 guidelines of the US Multi-Society Task Force on Colorectal Cancer.10Our objective was to estimate the incremental cost-effectiveness of 10 strategies for colorectal cancer screening, as well as the absence of a screening program. The current study is more complete than earlier studies because we included information on quality of life, noncompliance with screening and the efficacy observed in recent randomized trials of colorectal cancer treatments. The complete model is available in Appendix 1 (available at www.cmaj.ca/cgi/content/full/cmaj.090845/DC1). This article focuses on the comparison of no screening and three screening strategies:1 low-sensitivity guaiac fecal occult blood test,2 performed annually; fecal immunochemical test,3 performed annually; and colonoscopy, performed every 10 years. These three tests are currently being used or considered for population-based screening of average-risk individuals in some Canadian provinces.  相似文献   
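The incremental cost-effectiveness ratios quoted above come from a probabilistic Markov model. As a rough illustration of how ICERs are assembled once a model has produced per-person costs and quality-adjusted life expectancy, the sketch below ranks strategies, removes simply dominated options and divides incremental cost by incremental QALYs. Every cost and QALY figure in it is an invented placeholder, not an output of the study's model.

```python
# Each strategy: (name, discounted cost per person, discounted QALYs per person).
# All figures are invented placeholders, not outputs of the study's Markov model.
strategies = [
    ("No screening",                 2900.0, 18.10),
    ("Annual low-sensitivity gFOBT", 3050.0, 18.13),
    ("Annual FIT",                   3060.0, 18.16),
    ("Colonoscopy every 10 years",   3300.0, 18.19),
]

def icers(strategies):
    """Order strategies by effectiveness, drop simply dominated ones
    (more costly and no more effective) and compute incremental
    cost-effectiveness ratios along the remaining frontier.
    Extended dominance (non-increasing ICERs) is not handled here."""
    ranked = sorted(strategies, key=lambda s: s[2])      # ascending QALYs
    frontier = [ranked[0]]
    for name, cost, qalys in ranked[1:]:
        while frontier and cost <= frontier[-1][1]:      # predecessor is dominated
            frontier.pop()
        frontier.append((name, cost, qalys))
    return [(curr[0], prev[0], (curr[1] - prev[1]) / (curr[2] - prev[2]))
            for prev, curr in zip(frontier, frontier[1:])]

for strategy, comparator, icer in icers(strategies):
    print(f"{strategy} vs {comparator}: ${icer:,.0f} per QALY gained")
```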

5.

Background:

The importance of chronic inflammation as a determinant of aging phenotypes may have been underestimated in previous studies that used a single measurement of inflammatory markers. We assessed inflammatory markers twice over a 5-year exposure period to examine the association between chronic inflammation and future aging phenotypes in a large population of men and women.

Methods:

We obtained data for 3044 middle-aged adults (28.2% women) who were participating in the Whitehall II study and had no history of stroke, myocardial infarction or cancer at our study’s baseline (1997–1999). Interleukin-6 was measured at baseline and 5 years earlier. Cause-specific mortality, chronic disease and functioning were ascertained from hospital data, register linkage and clinical examinations. We used these data to create 4 aging phenotypes at the 10-year follow-up (2007–2009): successful aging (free of major chronic disease and with optimal physical, mental and cognitive functioning), incident fatal or nonfatal cardiovascular disease, death from noncardiovascular causes and normal aging (all other participants).

Results:

Of the 3044 participants, 721 (23.7%) met the criteria for successful aging at the 10-year follow-up, 321 (10.6%) had cardiovascular disease events, 147 (4.8%) died from noncardiovascular causes, and the remaining 1855 (60.9%) were included in the normal aging phenotype. After adjustment for potential confounders, having a high interleukin-6 level (> 2.0 ng/L) twice over the 5-year exposure period nearly halved the odds of successful aging at the 10-year follow-up (odds ratio [OR] 0.53, 95% confidence interval [CI] 0.38–0.74) and increased the risk of future cardiovascular events (OR 1.64, 95% CI 1.15–2.33) and noncardiovascular death (OR 2.43, 95% CI 1.58–3.80).

Interpretation:

Chronic inflammation, as ascertained by repeat measurements, was associated with a range of unhealthy aging phenotypes and a decreased likelihood of successful aging. Our results suggest that assessing long-term chronic inflammation by repeat measurement of interleukin-6 has the potential to guide clinical practice. Chronic inflammation has been implicated in the pathogenesis of age-related conditions, 1 such as type 2 diabetes, 2 , 3 cardiovascular disease, 4 cognitive impairment 5 and brain atrophy. 6 Chronic inflammation may result from or be a cause of age-related disease processes (illustrated in Appendix 1, available at www.cmaj.ca/lookup/suppl/doi:10.1503/cmaj.122072/-/DC1 ). For example, obesity increases inflammation, and chronic inflammation, in turn, contributes to the development of type 2 diabetes by inducing insulin resistance, 7 , 8 and to coronary artery disease by promoting atherogenesis. 9 Thus, raised levels of inflammation appear to be implicated in various pathological processes leading to diseases in older age. Of the various markers of systemic inflammation, interleukin-6 is particularly relevant to aging outcomes. There is increasing evidence that interleukin-6 is the pro-inflammatory cytokine that “drives” downstream inflammatory markers, such as C-reactive protein and fibrinogen. 10 , 11 Interleukin-6, in contrast to C-reactive protein and fibrinogen, is also likely to play a causal role in aging owing to its direct effects on the brain and skeletal muscles. 12 , 13 In addition, results of Mendelian randomization studies of interleukin-6 and studies of antagonists are consistent with a causal role for interleukin-6 in relation to coronary artery disease, again in contrast to C-reactive protein and fibrinogen. 14 However, current understanding of the link between chronic inflammation and aging phenotypes is hampered by the methodologic limitations of many existing studies. Most studies reported an assessment of inflammation based on a single measurement, precluding a distinction between the short-term (acute) and longer-term (chronic) impact of the inflammatory process on disease outcomes. 7 We conducted a study using 2 measurements of interleukin-6 obtained about 5 years apart to examine the association between chronic inflammation and aging phenotypes assessed 10 years later in a large population of men and women. Because inflammation characterizes a wide range of pathological processes, we considered several aging phenotypes, including cardiovascular disease (fatal and nonfatal), death from noncardiovascular causes and successful aging (optimal functioning across different physical, mental and cognitive domains).  相似文献   

6.

Background:

A link between obstructive sleep apnea and cancer development or progression has been suggested, possibly through chronic hypoxemia, but supporting evidence is limited. We examined the association between the severity of obstructive sleep apnea and prevalent and incident cancer, controlling for known risk factors for cancer development.

Methods:

We included all adults referred with possible obstructive sleep apnea who underwent a first diagnostic sleep study at a single large academic hospital between 1994 and 2010. We linked patient data with data from Ontario health administrative databases from 1991 to 2013. Cancer diagnosis was derived from the Ontario Cancer Registry. We assessed the cross-sectional association between obstructive sleep apnea and prevalent cancer at the time of the sleep study (baseline) using logistic regression analysis. Cox regression models were used to investigate the association between obstructive sleep apnea and incident cancer among patients free of cancer at baseline.

Results:

Of 10 149 patients who underwent a sleep study, 520 (5.1%) had a cancer diagnosis at baseline. Over a median follow-up of 7.8 years, 627 (6.5%) of the 9629 patients who were free of cancer at baseline had incident cancer. In multivariable regression models, the severity of sleep apnea was not significantly associated with either prevalent or incident cancer after adjustment for age, sex, body mass index and smoking status at baseline (apnea–hypopnea index > 30 v. < 5: adjusted odds ratio [OR] 0.96, 95% confidence interval [CI] 0.71–1.30, for prevalent cancer, and adjusted hazard ratio [HR] 1.02, 95% CI 0.80–1.31, for incident cancer; sleep time spent with oxygen saturation < 90%, per 10-minute increase: adjusted OR 1.01, 95% CI 1.00–1.03, for prevalent cancer, and adjusted HR 1.00, 95% CI 0.99–1.02, for incident cancer).

Interpretation:

In a large cohort, the severity of obstructive sleep apnea was not independently associated with either prevalent or incident cancer. Additional studies are needed to elucidate whether there is an independent association with specific types of cancer.Obstructive sleep apnea is a sleep-related breathing disorder characterized by repetitive episodes of upper-airway obstruction during sleep. Through sleep fragmentation, hypoxemia, hypercapnia, swings in intrathoracic pressure and increased sympathetic activity, these episodes lead to symptoms and health consequences.1 In 2009, 23% of Canadian adults reported risk factors for obstructive sleep apnea, and 5% of the population 45 years and older reported being told by a health professional that they had the condition.2Obstructive sleep apnea has been postulated to cause cancer3,4 or cancer progression,5 possibly through chronic intermittent hypoxemia,6 thus making it a potential modifiable risk factor for cancer development.7 However, the longitudinal evidence on this association is limited. Four cohort studies evaluated the longitudinal association between obstructive sleep apnea (expressed by the apnea–hypopnea index, oxygen desaturation or symptoms) and cancer development or cancer-related mortality (Appendix 1, available at www.cmaj.ca/lookup/suppl/doi:10.1503/cmaj.140238/-/DC1).35,8 All had limitations. Of the 3 that reported a positive association,3,5,8 2 studies included a small number of participants with severe obstructive sleep apnea, had a relatively small number of events and did not consider competing risk of death from other causes;5,8 and 2 used less reliable sleep-testing devices to define obstructive sleep apnea,3,8 which may have introduced measurement bias. In the only study that did not show an association between obstructive sleep apnea and cancer,4 the former was diagnosed on the basis of self-reported symptoms, which could have resulted in misclassification of exposure.There is a need for a sufficiently large cohort study with a long enough follow-up to allow for the potential development of cancer that adjusts for important potential confounders, examines common cancer subtypes and has a rigorous assessment of both obstructive sleep apnea and cancer.7,9,10 Our study was designed to improve upon the methods of published studies. We examined the association between the severity of obstructive sleep apnea (expressed by the apnea–hypopnea index or oxygen desaturation) and prevalent or incident cancer after controlling for known cancer risk factors.  相似文献   

7.

Background:

Meta-analyses of continuous outcomes typically provide enough information for decision-makers to evaluate the extent to which chance can explain apparent differences between interventions. The interpretation of the magnitude of these differences — from trivial to large — can, however, be challenging. We investigated clinicians’ understanding and perceptions of usefulness of 6 statistical formats for presenting continuous outcomes from meta-analyses (standardized mean difference, minimal important difference units, mean difference in natural units, ratio of means, relative risk and risk difference).

Methods:

We invited 610 staff and trainees in internal medicine and family medicine programs in 8 countries to participate. Paper-based, self-administered questionnaires presented summary estimates of hypothetical interventions versus placebo for chronic pain. The estimates showed either a small or a large effect for each of the 6 statistical formats for presenting continuous outcomes. Questions addressed participants’ understanding of the magnitude of treatment effects and their perception of the usefulness of the presentation format. We randomly assigned participants 1 of 4 versions of the questionnaire, each with a different effect size (large or small) and presentation order for the 6 formats (1 to 6, or 6 to 1).

Results:

Overall, 531 (87.0%) of the clinicians responded. Respondents best understood risk difference, followed by relative risk and ratio of means. Similarly, they perceived the dichotomous presentation of continuous outcomes (relative risk and risk difference) to be most useful. Presenting results as a standardized mean difference, the longest standing and most widely used approach, was poorly understood and perceived as least useful.

Interpretation:

None of the presentation formats were well understood or perceived as extremely useful. Clinicians best understood the dichotomous presentations of continuous outcomes and perceived them to be the most useful. Further initiatives to help clinicians better grasp the magnitude of the treatment effect are needed.Health professionals increasingly rely on summary estimates from systematic reviews and meta-analyses to guide their clinical decisions and to provide information for shared decision-making. Meta-analyses of clinical trials typically provide the information necessary for decision-makers to evaluate the extent to which chance can explain apparent intervention effects (i.e., statistical significance). However, interpreting the magnitude of the treatment effect — from trivial to large — particularly for continuous outcome measures, can be challenging.Such challenges include decision-makers’ unfamiliarity with the instruments used to measure the outcome. For instance, without further information, clinicians may have difficulty grasping the importance of a 5-point difference on the Short-Form Health Survey-36 (SF-36) or a 1-point difference on a visual analogue scale for pain.1 Second, trials often use different instruments to measure the same construct. For instance, investigators may measure physical function among patients with arthritis using 1 of 5 instruments (the Western Ontario and McMaster Universities Arthritis Index using either a visual analogue or Likert scale; the Arthritis Impact Measurement Scale; the SF-36 Physical Function; or the Lequesne index).2,3Authors have several options for pooling results of continuous outcomes. When all trials have used the same instrument to measure outcomes such as physical function or pain, the most straightforward method is to present the mean difference in natural units between the intervention and control groups. When trialists have used different instruments to measure the same construct, authors of systematic reviews typically report differences between intervention and control groups in standard deviation units, an approach known as the standardized mean difference (SMD). 
This approach involves dividing the mean difference in each trial by the pooled standard deviation for that trial’s outcome.4For meta-analyses of outcomes measured using different instruments, presenting results as an SMD is the longest standing and most widely used approach and is recommended in the Cochrane handbook for systematic reviews of interventions.4 Limitations of this approach include, however, statistical bias toward decreased treatment effects,5,6 the possibility that decision-makers will find the measure difficult to interpret7,8 and the possibility that the same treatment effect will appear different depending on whether the study population had similar results in the measure of interest (i.e., if homogeneous, a small standard deviation) or varied greatly in the measure of interest (i.e., if heterogeneous, a large standard deviation).9,10Several research groups have proposed alternative statistical formats for presenting continuous outcomes from meta-analyses that they postulate clinicians will more easily interpret.68,1116 The Grading of Recommendations Assessment, Development and Evaluation (GRADE) Working Group recently provided an overview of methods for presenting pooled continuous data.9,10 These alternatives (Appendix 1, available at www.cmaj.ca/lookup/suppl/doi:10.1503/cmaj.150430/-/DC1), although intuitively compelling, have seen limited use.We conducted a survey to determine clinicians’ understanding of the magnitude of treatment effect for 6 approaches to the presentation of continuous outcomes from meta-analyses, as well as their perceptions of the usefulness of each approach for clinical decision-making. We also evaluated whether their understanding and perceptions of usefulness were influenced by country, medical specialty, clinical experience or training in health research methodology.  相似文献   
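Because the standardized mean difference is simply the between-group mean difference divided by a pooled standard deviation, the calculation can be shown in a few lines. The sketch below uses invented numbers for a single hypothetical pain trial and omits the small-sample correction some software applies.

```python
from math import sqrt

def standardized_mean_difference(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Cohen's d form of the SMD: between-group mean difference divided by the
    pooled standard deviation. (Meta-analysis software often adds Hedges'
    small-sample correction on top of this.)"""
    pooled_sd = sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2))
    return (mean_t - mean_c) / pooled_sd

# Hypothetical chronic-pain trial: scores on a 10-cm visual analogue scale.
smd = standardized_mean_difference(mean_t=3.1, sd_t=2.0, n_t=60,
                                   mean_c=4.0, sd_c=2.2, n_c=58)
print(f"SMD = {smd:.2f} standard deviation units")
```

By convention, SMDs of about 0.2, 0.5 and 0.8 are read as small, moderate and large effects, a rule of thumb that still leaves the clinical importance of the difference implicit.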

8.
9.

Background

Studies of cardiac resynchronization therapy in addition to an implantable cardioverter defibrillator in patients with mild to moderate congestive heart failure had not been shown to reduce mortality until the recent RAFT trial (Resynchronization/Defibrillation for Ambulatory Heart Failure Trial). We performed a meta-analysis including the RAFT trial to determine the effect of cardiac resynchronization therapy with or without an implantable defibrillator on mortality.

Methods

We searched electronic databases and other sources for reports of randomized trials using a parallel or crossover design. We included studies involving patients with heart failure receiving optimal medical therapy that compared cardiac resynchronization therapy with optimal medical therapy alone, or cardiac resynchronization therapy plus an implantable defibrillator with a standard implantable defibrillator. The primary outcome was mortality. The optimum information size was considered to assess the minimum amount of information required in the literature to reach reliable conclusions about cardiac resynchronization therapy.

Results

Of 3071 reports identified, 12 studies (n = 7538) were included in our meta-analysis. Compared with optimal medical therapy alone, cardiac resynchronization therapy plus optimal medical therapy significantly reduced mortality (relative risk [RR] 0.73, 95% confidence interval [CI] 0.62–0.85). Compared with an implantable defibrillator alone, cardiac resynchronization therapy plus an implantable defibrillator significantly reduced mortality (RR 0.83, 95% CI 0.72–0.96). This last finding remained significant among patients with New York Heart Association (NYHA) class I or II disease (RR 0.80, 95% CI 0.67–0.96) but not among those with class III or IV disease (RR 0.84, 95% CI 0.69–1.07). Analysis of the optimum information size showed that the sequential monitoring boundary was crossed, which suggests no need for further clinical trials.

Interpretation

The cumulative evidence is now conclusive that the addition of cardiac resynchronization to optimal medical therapy or defibrillator therapy significantly reduces mortality among patients with heart failure.Congestive heart failure is currently reaching epidemic proportions in Canada, with 500 000 Canadians affected and 50 000 new patients identified each year.1 It accounts for more than 100 000 hospital admissions per year and has a one-year mortality ranging from 15% to 50%, depending on the severity of heart failure.2 By 2050, the number of patients with heart failure is projected to increase threefold.2Advances in medical therapies have resulted in substantial reductions in mortality associated with congestive heart failure.37 The use of devices has recently become an important adjuvant therapy.8 Cardiac resynchronization therapy involves pacing from both the right and left ventricles simultaneously to improve myocardial efficiency (see radiographs in Appendix 1, at www.cmaj.ca/cgi/content/full/cmaj.101685/DC1). Cardiac resynchronization therapy has been shown to reduce morbidity and, when compared with medical therapy alone, to reduce mortality.913 Until recently, it was not shown to reduce mortality among patients who also received an implantable cardioverter defibrillator. Among patients receiving optimal medical therapy, the Resynchronization/Defibrillation for Ambulatory Heart Failure Trial (RAFT) showed the superiority of cardiac resynchronization therapy in addition to an implantable defibrillator over a standard implantable defibrillator in reducing mortality and the combined outcome of death from any cause or hospital admission related to heart failure.14We performed a meta-analysis to further assess the effect on mortality of cardiac resynchronization therapy with and without an implantable defibrillator among patients with mildly symptomatic and advanced heart failure.  相似文献   

10.

Background

Proven efficacious therapies are sometimes underused in patients with chronic cardiac conditions, resulting in suboptimal outcomes. We evaluated whether evidence summaries, which were either unsigned or signed by local opinion leaders, improved the quality of secondary prevention care delivered by primary care physicians of patients with coronary artery disease.

Methods

We performed a randomized trial, clustered at the level of the primary care physician, with 3 study arms: control, unsigned statements or opinion leader statements. The statements were faxed to primary care physicians of adults with coronary artery disease at the time of elective cardiac catheterization. The primary outcome was improvement in statin management (initiation or dose increase) 6 months after catheterization.

Results

We enrolled 480 adults from 252 practices. Although statin use was high at baseline (n = 316 [66%]), most patients were taking a low dose (mean 32% of the guideline-recommended dose), and their low-density lipoprotein (LDL) cholesterol levels were elevated (mean 3.09 mmol/L). Six months after catheterization, statin management had improved in 79 of 157 patients (50%) in the control arm, 85 of 158 (54%) patients in the unsigned statement group (adjusted odds ratio [OR] 1.18, 95% CI 0.71–1.94, p = 0.52) and 99 of 165 (60%) patients in the opinion leader statement group (adjusted OR 1.51, 95% CI 0.94–2.42, p = 0.09). The mean fasting LDL cholesterol levels after 6 months were similar in all 3 study arms: 2.35 (standard deviation [SD] 0.86) mmol/L in the control arm compared with 2.24 (SD 0.73) among those in the opinion leader group (p = 0.48) and 2.19 (SD 0.68) in the unsigned statement group (p = 0.32).

Interpretation

Faxed evidence reminders for primary care physicians, even when endorsed by local opinion leaders, were insufficient to optimize the quality of care for adults with coronary artery disease. ClinicalTrials.gov trial register no. NCT00175240.Despite the abundant evidence base for the secondary prevention of coronary artery disease,1 many of these therapies are underused in clinical practice.14 These gaps between evidence and clinical reality are linked to poor outcomes for patients.5 Improved uptake of secondary-prevention therapies would reduce cardiac morbidity and mortality.6 However, most quality-improvement initiatives in coronary artery disease have focused on patients in hospital. Few studies have evaluated means of translating evidence into clinical practice for outpatients cared for by primary care physicians.7Previously,8 we developed and tested the Local Opinion Leader Statement (Appendix 1, available at www.cmaj.ca/cgi/content/full/cmaj.090917/DC1), a quality-improvement tool consisting of a 1-page summary of evidence with explicit treatment advice about secondary prevention of coronary artery disease. This summary was endorsed by local opinion leaders and was faxed to the primary care physicians of patients with coronary artery disease. Although this fax did not lead to a significant improvement in statin prescribing, our pilot trial was small (117 patients) and enrolled patients with chronic coronary artery disease at the time they presented to their community pharmacy for medication refills. We hypothesized that this was not a “teachable moment” and that if the intervention was given at a time when the diagnosis was made, it would be more influential on the primary care physician.Thus, we designed this trial to test the impact of the opinion leader statement if it was sent to the primary physician at the time when patients were diagnosed with coronary artery disease. In addition, because local opinion leaders are not always self-evident and conducting surveys to identify them for each condition and in each locale would be time-consuming and expensive, we also evaluated the impact of an unsigned statement.  相似文献   

11.
CMAJ. 2015;187(8):E243-E252.
Background:

We aimed to prospectively validate a novel 1-hour algorithm using high-sensitivity cardiac troponin T measurement for early rule-out and rule-in of acute myocardial infarction (MI).

Methods:

In a multicentre study, we enrolled 1320 patients presenting to the emergency department with suspected acute MI. The high-sensitivity cardiac troponin T 1-hour algorithm, incorporating baseline values as well as absolute changes within the first hour, was validated against the final diagnosis. The final diagnosis was then adjudicated by 2 independent cardiologists using all available information, including coronary angiography, echocardiography, follow-up data and serial measurements of high-sensitivity cardiac troponin T levels.

Results:

Acute MI was the final diagnosis in 17.3% of patients. With application of the high-sensitivity cardiac troponin T 1-hour algorithm, 786 (59.5%) patients were classified as “rule-out,” 216 (16.4%) were classified as “rule-in” and 318 (24.1%) were classified into the “observational zone.” The sensitivity and the negative predictive value for acute MI in the rule-out zone were 99.6% (95% confidence interval [CI] 97.6%–99.9%) and 99.9% (95% CI 99.3%–100%), respectively. The specificity and the positive predictive value for acute MI in the rule-in zone were 95.7% (95% CI 94.3%–96.8%) and 78.2% (95% CI 72.1%–83.6%), respectively. The 1-hour algorithm provided higher negative and positive predictive values than the standard interpretation of high-sensitivity cardiac troponin T using a single cut-off level (both p < 0.05). Cumulative 30-day mortality was 0.0%, 1.6% and 1.9% in patients classified in the rule-out, observational and rule-in groups, respectively (p = 0.001).

Interpretation:

This rapid strategy incorporating high-sensitivity cardiac troponin T baseline values and absolute changes within the first hour substantially accelerated the management of suspected acute MI by allowing safe rule-out as well as accurate rule-in of acute MI in 3 out of 4 patients. Trial registration: ClinicalTrials.gov, NCT00470587.

Acute myocardial infarction (MI) is a major cause of death and disability worldwide. As highly effective treatments are available, early and accurate detection of acute MI is crucial.15 Clinical assessment, 12-lead electrocardiography (ECG) and measurement of cardiac troponin levels form the pillars for the early diagnosis of acute MI in the emergency department. Major advances have recently been achieved by the development of more sensitive cardiac troponin assays.615 High-sensitivity cardiac troponin assays, which allow measurement of even low concentrations of cardiac troponin with high precision, have been shown to largely overcome the sensitivity deficit of conventional cardiac troponin assays within the first hours of presentation in the diagnosis of acute MI.615 These studies have consistently shown that the classic diagnostic interpretation of cardiac troponin as a dichotomous variable (troponin-negative and troponin-positive) no longer seems appropriate, because the positive predictive value for acute MI of being troponin-positive was only about 50%.615 The best way to interpret and clinically use high-sensitivity cardiac troponin levels in the early diagnosis of acute MI is still debated.3,5,7 In a pilot study, a novel high-sensitivity cardiac troponin T 1-hour algorithm was shown to allow accurate rule-out and rule-in of acute MI within 1 hour in up to 75% of patients.11 This algorithm is based on 2 concepts.
First, high-sensitivity cardiac troponin T is interpreted as a quantitative variable where the proportion of patients who have acute MI increases with increasing concentrations of cardiac troponin T.615 Second, early absolute changes in the concentrations within 1 hour provide incremental diagnostic information when added to baseline levels, with the combination acting as a reliable surrogate for late concentrations at 3 or 6 hours.615 However, many experts remained skeptical regarding the safety of the high-sensitivity cardiac troponin T 1-hour algorithm and its wider applicability.16 Accordingly, this novel diagnostic concept has not been adopted clinically to date. Because the clinical application of this algorithm would represent a profound change in clinical practice, prospective validation in a large cohort is mandatory before it can be considered for routine clinical use. The aim of this multicentre study was to prospectively validate the high-sensitivity cardiac troponin T 1-hour algorithm in a large independent cohort.  相似文献   
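Those two concepts, a baseline concentration plus the absolute change over the first hour, map naturally onto a three-way triage function. The sketch below illustrates only that structure; the numeric cut-offs in it are placeholders chosen for illustration, not the thresholds validated in the study.

```python
def triage_1h(baseline_ng_l: float, change_1h_ng_l: float) -> str:
    """Three-way triage from a baseline high-sensitivity troponin T value and
    the absolute change after 1 hour. The thresholds below are illustrative
    placeholders, not the study's validated cut-offs."""
    RULE_OUT_BASELINE = 12.0   # ng/L, placeholder
    RULE_OUT_DELTA = 3.0       # ng/L, placeholder
    RULE_IN_BASELINE = 52.0    # ng/L, placeholder
    RULE_IN_DELTA = 5.0        # ng/L, placeholder

    delta = abs(change_1h_ng_l)
    if baseline_ng_l < RULE_OUT_BASELINE and delta < RULE_OUT_DELTA:
        return "rule-out"
    if baseline_ng_l >= RULE_IN_BASELINE or delta >= RULE_IN_DELTA:
        return "rule-in"
    return "observe"

print(triage_1h(baseline_ng_l=7.0, change_1h_ng_l=1.0))    # rule-out
print(triage_1h(baseline_ng_l=60.0, change_1h_ng_l=12.0))  # rule-in
print(triage_1h(baseline_ng_l=20.0, change_1h_ng_l=2.0))   # observe
```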

12.
Background:

Although assessment of geriatric syndromes is increasingly encouraged in older adults, little evidence exists to support its systematic use by general practitioners (GPs). The aim of this study was to determine whether a systematic geriatric evaluation performed by GPs can prevent functional decline.

Methods:

We conducted a controlled, open-label, pragmatic cluster-randomized trial in 42 general practices in Switzerland. Participating GPs were expected to enrol an average of 10 community-dwelling adults (aged ≥ 75 yr) who understood French, and had visited their GP at least twice in the previous year. The intervention consisted of yearly assessment by the GP of 8 geriatric syndromes with an associated tailored management plan according to assessment results, compared with routine care. Our primary outcomes were the proportion of patients who lost at least 1 instrumental activity of daily living (ADL) and the proportion who lost at least 1 basic ADL, over 2 years. Our secondary outcomes were quality-of-life scores, measured using the older adult module of the World Health Organization Quality of Life Instrument, and health care use.

Results:

Forty-two GPs recruited 429 participants (63% women) with a mean age of 82.5 years (standard deviation 4.8 yr) at time of recruitment. Of these, we randomly assigned 217 participants to the intervention and 212 to the control arm. The proportion of patients who lost at least 1 instrumental ADL in the intervention and control arms during the course of the study was 43.6% and 47.6%, respectively (risk difference −4.0%, 95% confidence interval [CI] −14.9% to 6.7%, p = 0.5). The proportion of patients who lost at least 1 basic ADL was 12.4% in the intervention arm and 16.9% in the control arm (risk difference −5.1%, 95% CI −14.3% to 4.1%, p = 0.3).

Interpretation:

A yearly geriatric evaluation with an associated management plan, conducted systematically in GP practices, does not significantly lessen functional decline among community-dwelling, older adult patients, compared with routine care. Trial registration: ClinicalTrials.gov, NCT02618291.

The World Health Organization has defined healthy aging as the process of developing and maintaining functional ability that enables well-being in older age.1 Functional ability is often measured by an individual’s ability to perform activities of daily living (ADLs) without assistance. Geriatric syndromes, corresponding to multifactorial, chronic conditions, can impair physical and mental capacities,24 and are directly associated with functional decline.5 If recognized early, adapted preventive measures and management strategies can be started to limit functional decline.68 Interventions that have been shown to delay functional decline include comprehensive geriatric assessment, regular home visits and physical therapy.6,8,9 Comprehensive geriatric assessment consists of a “multidisciplinary diagnostic and treatment process that identifies medical, psychosocial, and functional capabilities of older adults to develop a coordinated plan to maximize overall health with ageing.”10 These assessments are usually performed by specialized geriatric teams for patients who have already been identified as frail or in the context of rehabilitation. However, most older patients see only their general practitioner (GP) and are not provided a comprehensive geriatric assessment, considering that this is a lengthy process that is often beyond the scope of a usual primary care consultation. A recent systematic review of comprehensive geriatric assessment in primary care found only 4 studies conducted in this setting,11 showing mixed effects on clinical outcomes. Only 1 study assessed functional ability, and it showed no impact in this context.12In primary care, it may be more beneficial to use shorter screening tools.1319 Previous studies using shorter tools adapted for primary care have failed to show a difference for patients compared with routine care.17,18 These interventions usually targeted patients who were already identified as frail or with a predefined number of problems.17,18,20In contrast, our Active Geriatric Evaluation (AGE) tool targets all patients aged 75 and older. This clinical tool can be easily integrated to clinical encounters in GP practices, without the need for additional organizational changes. For this study, we aimed to determine whether the AGE tool, specifically designed for GPs and consisting of a brief assessment of the most relevant geriatric syndromes combined with management plans, could slow functional decline in older patients.  相似文献   

13.

Background:

Evidence from controlled trials encourages the intake of dietary pulses (beans, chickpeas, lentils and peas) as a method of improving dyslipidemia, but heart health guidelines have stopped short of ascribing specific benefits to this type of intervention or have graded the beneficial evidence as low. We conducted a systematic review and meta-analysis of randomized controlled trials (RCTs) to assess the effect of dietary pulse intake on established therapeutic lipid targets for cardiovascular risk reduction.

Methods:

We searched electronic databases and bibliographies of selected trials for relevant articles published through Feb. 5, 2014. We included RCTs of at least 3 weeks’ duration that compared a diet emphasizing dietary pulse intake with an isocaloric diet that did not include dietary pulses. The lipid targets investigated were low-density lipoprotein (LDL) cholesterol, apolipoprotein B and non–high-density lipoprotein (non-HDL) cholesterol. We pooled data using a random-effects model.

Results:

We identified 26 RCTs (n = 1037) that satisfied the inclusion criteria. Diets emphasizing dietary pulse intake at a median dose of 130 g/d (about 1 serving daily) significantly lowered LDL cholesterol levels compared with the control diets (mean difference −0.17 mmol/L, 95% confidence interval −0.25 to −0.09 mmol/L). Treatment effects on apolipoprotein B and non-HDL cholesterol were not observed.

Interpretation:

Our findings suggest that dietary pulse intake significantly reduces LDL cholesterol levels. Trials of longer duration and higher quality are needed to verify these results. Trial registration: ClinicalTrials.gov, no. NCT01594567.Abnormal blood concentrations of lipids are one of the most important modifiable risk factors for cardiovascular disease. Although statins are effective in reducing low-density lipoprotein (LDL) cholesterol levels, major health organizations have maintained that the initial and essential approach to the prevention and management of cardiovascular disease is to modify dietary and lifestyle patterns.14Dietary non–oil-seed pulses (beans, chickpeas, lentils and peas) are foods that have received particular attention for their ability to reduce the risk of cardiovascular disease. Consumption of dietary pulses was associated with a reduction in cardiovascular disease in a large observational study5 and with improvements in LDL cholesterol levels in small trials.68 Although most guidelines on the prevention of major chronic diseases encourage the consumption of dietary pulses as part of a healthy strategy,2,3,913 none has included recommendations based on the direct benefits of lowering lipid concentrations or reducing the risk of cardiovascular disease. In all cases, the evidence on which recommendations have been based was assigned a low grade,2,3,913 and dyslipidemia guidelines do not address dietary pulse intake directly.1,4To improve the evidence on which dietary guidelines are based, we conducted a systematic review and meta-analysis of randomized controlled trials (RCTs) of the effect of dietary pulse intake on established therapeutic lipid targets for cardiovascular risk reduction. The lipid targets were LDL cholesterol, apolipoprotein B and non–high-density lipoprotein (non-HDL) cholesterol.  相似文献   
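The pooled estimate above comes from a random-effects model. One common implementation is the DerSimonian-Laird method, in which inverse-variance weights are widened by an estimate of between-trial variance. The sketch below applies it to a few invented trials reporting mean differences in LDL cholesterol; the data are placeholders, not the 26 trials in the review.

```python
from math import sqrt

def dersimonian_laird(effects, ses):
    """Random-effects pooled estimate (DerSimonian-Laird) from per-trial
    effect sizes and their standard errors."""
    w = [1 / se**2 for se in ses]                      # fixed-effect weights
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                      # between-trial variance
    w_re = [1 / (se**2 + tau2) for se in ses]          # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    se_pooled = sqrt(1 / sum(w_re))
    return pooled, (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)

# Hypothetical trials: mean difference in LDL cholesterol (mmol/L) and its SE.
md = [-0.25, -0.10, -0.20, -0.05]
se = [0.08, 0.06, 0.10, 0.07]
pooled, (lo, hi) = dersimonian_laird(md, se)
print(f"pooled MD = {pooled:.2f} mmol/L (95% CI {lo:.2f} to {hi:.2f})")
```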

14.

Background

Little is known about the incidence and causes of heparin-induced skin lesions. The 2 most commonly reported causes of heparin-induced skin lesions are immune-mediated heparin-induced thrombocytopenia and delayed-type hypersensitivity reactions.

Methods

We prospectively examined consecutive patients who received subcutaneous heparin (most often enoxaparin or nadroparin) for the presence of heparin-induced skin lesions. If such lesions were identified, we performed a skin biopsy, platelet count measurements, and antiplatelet-factor 4 antibody and allergy testing.

Results

We enrolled 320 patients. In total, 24 patients (7.5%, 95% confidence interval [CI] 4.7%–10.6%) had heparin-induced skin lesions. Delayed-type hypersensitivity reactions were identified as the cause in all 24 patients. One patient with histopathologic evidence of delayed-type hypersensitivity tested positive for antiplatelet-factor 4 antibodies. We identified the following risk factors for heparin-induced skin lesions: a body mass index greater than 25 (odds ratio [OR] 4.6, 95% CI 1.7–15.3), duration of heparin therapy longer than 9 days (OR 5.9, 95% CI 1.9–26.3) and female sex (OR 3.0, 95% CI 1.1–8.8).

Interpretation

Heparin-induced skin lesions are relatively common, have identifiable risk factors and are commonly caused by a delayed-type hypersensitivity reaction (type IV allergic response). (ClinicalTrials.gov trial register no. NCT00510432.) Heparin has been used as an anticoagulant for over 60 years.1 Well-known adverse effects of heparin therapy are bleeding, osteoporosis, hair loss, and immune and nonimmune heparin-induced thrombocytopenia. The incidence of heparin-induced skin lesions is unknown, despite being increasingly reported.24 Heparin-induced skin lesions may be caused by at least 5 mechanisms: delayed-type (type IV) hypersensitivity responses,2,46 immune-mediated thrombocytopenia,3 type I allergic reactions,7,8 skin necrosis9 and pustulosis.10 Heparin-induced skin lesions may indicate the presence of life-threatening heparin-induced thrombocytopenia11 — even in the absence of thrombocytopenia.3 There are no data available on the incidence of heparin-induced skin lesions or their causes. Given the rising number of reports of heparin-induced skin lesions and the importance of correctly diagnosing this condition, we sought to determine the incidence of heparin-induced skin lesions.
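The risk factors above are reported as odds ratios. For orientation, the sketch below shows how an unadjusted odds ratio and its Wald confidence interval are computed from a 2x2 table; the counts are hypothetical placeholders, and the study's estimates were derived from its actual cohort.

```python
from math import exp, log, sqrt

def odds_ratio(exposed_cases, exposed_noncases, unexposed_cases, unexposed_noncases):
    """Unadjusted odds ratio with a 95% Wald confidence interval from a 2x2 table."""
    or_ = (exposed_cases * unexposed_noncases) / (exposed_noncases * unexposed_cases)
    se_log = sqrt(1 / exposed_cases + 1 / exposed_noncases
                  + 1 / unexposed_cases + 1 / unexposed_noncases)
    lo, hi = exp(log(or_) - 1.96 * se_log), exp(log(or_) + 1.96 * se_log)
    return or_, (lo, hi)

# Hypothetical 2x2 table: skin lesions (cases) by BMI > 25 (exposure).
or_, (lo, hi) = odds_ratio(exposed_cases=18, exposed_noncases=150,
                           unexposed_cases=6, unexposed_noncases=146)
print(f"OR = {or_:.1f} (95% CI {lo:.1f} to {hi:.1f})")
```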

15.

Background:

Recent warnings from Health Canada regarding codeine for children have led to increased use of nonsteroidal anti-inflammatory drugs and morphine for common injuries such as fractures. Our objective was to determine whether morphine administered orally has superior efficacy to ibuprofen in fracture-related pain.

Methods:

We used a parallel group, randomized, blinded superiority design. Children who presented to the emergency department with an uncomplicated extremity fracture were randomly assigned to receive either morphine (0.5 mg/kg orally) or ibuprofen (10 mg/kg) for 24 hours after discharge. Our primary outcome was the change in pain score using the Faces Pain Scale — Revised (FPS-R). Participants were asked to record pain scores immediately before and 30 minutes after receiving each dose.

Results:

We analyzed data from 66 participants in the morphine group and 68 participants in the ibuprofen group. For both morphine and ibuprofen, we found a reduction in pain scores (mean pre–post difference ± standard deviation for dose 1: morphine 1.5 ± 1.2, ibuprofen 1.3 ± 1.0, between-group difference [δ] 0.2 [95% confidence interval (CI) −0.2 to 0.6]; dose 2: morphine 1.3 ± 1.3, ibuprofen 1.3 ± 0.9, δ 0 [95% CI −0.4 to 0.4]; dose 3: morphine 1.3 ± 1.4, ibuprofen 1.4 ± 1.1, δ −0.1 [95% CI −0.7 to 0.4]; and dose 4: morphine 1.5 ± 1.4, ibuprofen 1.1 ± 1.2, δ 0.4 [95% CI −0.2 to 1.1]). We found no significant differences in the change in pain scores between the morphine and ibuprofen groups at any of the 4 time points (p = 0.6). Participants in the morphine group had significantly more adverse effects than those in the ibuprofen group (56.1% v. 30.9%, p < 0.01).

Interpretation:

We found no significant difference in analgesic efficacy between orally administered morphine and ibuprofen. However, morphine was associated with a significantly greater number of adverse effects. Our results suggest that ibuprofen remains safe and effective for outpatient pain management in children with uncomplicated fractures. Trial registration: ClinicalTrials.gov, no. NCT01690780.There is ample evidence that analgesia is underused,1 underprescribed,2 delayed in its administration2 and suboptimally dosed 3 in clinical settings. Children are particularly susceptible to suboptimal pain management4 and are less likely to receive opioid analgesia.5 Untreated pain in childhood has been reported to lead to short-term problems such as slower healing6 and to long-term issues such as anxiety, needle phobia,7 hyperesthesia8 and fear of medical care.9 The American Academy of Pediatrics has reaffirmed its advocacy for the appropriate use of analgesia for children with acute pain.10Fractures constitute between 10% and 25% of all injuries.11 The most severe pain after an injury occurs within the first 48 hours, with more than 80% of children showing compromise in at least 1 functional area.12 Low rates of analgesia have been reported after discharge from hospital.13 A recently improved understanding of the pharmacogenomics of codeine has raised significant concerns about its safety,14,15 and has led to a Food and Drug Administration boxed warning16 and a Health Canada advisory17 against its use. Although ibuprofen has been cited as the most common agent used by caregivers to treat musculoskeletal pain,12,13 there are concerns that its use as monotherapy may lead to inadequate pain management.6,18 Evidence suggests that orally administered morphine13 and other opioids are increasingly being prescribed.19 However, evidence for the oral administration of morphine in acute pain management is limited.20,21 Thus, additional studies are needed to address this gap in knowledge and provide a scientific basis for outpatient analgesic choices in children. Our objective was to assess if orally administered morphine is superior to ibuprofen in relieving pain in children with nonoperative fractures.  相似文献   

16.

Background

The 2009 influenza A (H1N1) pandemic has required decision-makers to act in the face of substantial uncertainties. Simulation models can be used to project the effectiveness of mitigation strategies, but the choice of the best scenario may change depending on model assumptions and uncertainties.

Methods

We developed a simulation model of a pandemic (H1N1) 2009 outbreak in a structured population using demographic data from a medium-sized city in Ontario and epidemiologic influenza pandemic data. We projected the attack rate under different combinations of vaccination, school closure and antiviral drug strategies (with corresponding “trigger” conditions). To assess the impact of epidemiologic and program uncertainty, we used “combinatorial uncertainty analysis.” This permitted us to identify the general features of public health response programs that resulted in the lowest attack rates.

Results

Delays in vaccination of 30 days or more reduced the effectiveness of vaccination in lowering the attack rate. However, pre-existing immunity in 15% or more of the population kept the attack rates low, even if the whole population was not vaccinated or vaccination was delayed. School closure was effective in reducing the attack rate, especially if applied early in the outbreak, but this is not necessary if vaccine is available early or if pre-existing immunity is strong.

Interpretation

Early action, especially rapid vaccine deployment, is disproportionately effective in reducing the attack rate. This finding is particularly important given the early appearance of pandemic (H1N1) 2009 in many schools in September 2009.Jurisdictions in the northern hemisphere are bracing for a “fall wave” of pandemic (H1N1) 2009.13 Decision-makers face uncertainty, not just with respect to epidemiologic characteristics of the virus,4 but also program uncertainties related to feasibility, timeliness and effectiveness of mitigation strategies.5 Policy decisions must be made against this backdrop of uncertainty. However, the effectiveness of any mitigation strategy generally depends on the epidemiologic characteristics of the pathogen as well as the other mitigation strategies adopted. Mathematical models can project strategy effectiveness under hypothetical epidemiologic and program scenarios.612 In the case of pandemic influenza, models have been used to assess the effectiveness of school closure7 and optimal use of antiviral drug6,9,10 and vaccination strategies.8 However, model projections can be sensitive to input parameter values; thus, data uncertainty is an issue.13 Uncertainty analysis can help address the impact of uncertainties on model predictions but is often underutilized.13In this article, we present a simulation model of pandemic influenza transmission and mitigation in a population. This model projects the overall attack rate (percentage of people infected) during an outbreak. We introduce a formal method of uncertainty analysis that has not previously been applied to pandemic influenza, and we use this method to assess the impact of epidemiologic and program uncertainties. The model is intended to address the following policy questions that have been raised during the 2009 influenza pandemic: What is the impact of delayed vaccine delivery on attack rates? Can attack rates be substantially reduced without closing schools? What is the impact of pre-existing immunity from spring and summer 2009? We addressed these questions using a simulation model that projects the impact of vaccination, school closure and antiviral drug treatment strategies on attack rates.  相似文献   
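A rough sense of why vaccination delay and pre-existing immunity drive the attack rate can be gained from a deterministic SIR-type sketch. This is not the authors' structured, individual-based model; the compartmental form, parameter values and one-time vaccination campaign below are illustrative assumptions only.

    # Minimal SIR-style sketch (illustrative only; not the authors' model).
    # Shows qualitatively how a delayed, one-time vaccination campaign and
    # pre-existing immunity change the overall attack rate.

    def attack_rate(r0=1.5, pre_immune=0.0, vacc_coverage=0.4, vacc_delay_days=30,
                    vacc_efficacy=0.8, infectious_days=3.0, days=300, dt=0.1):
        """Fraction of the population infected over the course of the outbreak."""
        gamma = 1.0 / infectious_days        # recovery rate
        beta = r0 * gamma                    # transmission rate
        i = 1e-4                             # initial infectious fraction
        s = 1.0 - pre_immune - i             # susceptible fraction
        r = pre_immune                       # removed (immune) fraction
        cumulative = i
        vaccinated = False
        for step in range(int(days / dt)):
            if not vaccinated and step * dt >= vacc_delay_days:
                # One-time campaign: effectively protected susceptibles move to R.
                protected = s * vacc_coverage * vacc_efficacy
                s -= protected
                r += protected
                vaccinated = True
            new_infections = beta * s * i * dt
            recoveries = gamma * i * dt
            s -= new_infections
            i += new_infections - recoveries
            r += recoveries
            cumulative += new_infections
        return cumulative

    for delay in (0, 30, 60):
        for immune in (0.0, 0.15):
            ar = attack_rate(vacc_delay_days=delay, pre_immune=immune)
            print(f"delay {delay:2d} d, pre-existing immunity {immune:.0%}: attack rate ~ {ar:.1%}")

Under these placeholder assumptions the sketch reproduces the qualitative pattern reported above: earlier vaccine deployment and stronger pre-existing immunity both lower the attack rate.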

17.

Background

Minimally angulated fractures of the distal radius are common in children and have excellent outcomes. We conducted a randomized controlled trial to determine whether the use of a prefabricated splint is as effective as a cast in the recovery of physical function.

Methods

We included 96 children 5 to 12 years of age who were treated for a minimally angulated (≤ 15°) greenstick or transverse fracture of the wrist between April 2007 and September 2009 at a tertiary care pediatric hospital. Participants were randomly assigned to receive either a prefabricated wrist splint or a short arm cast for four weeks. The primary outcome was physical function at six weeks, measured using the performance version of the Activities Scale for Kids. Additional outcomes included the degree of angulation, range of motion, grip strength and complications.

Results

Of the 96 children, 46 received a splint and 50 a cast. The mean Activities Scale for Kids score at six weeks was 92.8 in the splint group and 91.4 in the cast group (difference 1.44, 95% confidence interval [CI] −1.75 to 4.62). Thus, the null hypothesis that the splint is less effective by at least seven points was rejected. The between-group difference in angulation at four weeks was not statistically significant (9.85° in the splint group and 8.20° in the cast group; mean difference 1.65°, 95% CI −1.82° to 5.11°), nor were the between-group differences in range of motion, grip strength and complications.

Interpretation

In children with minimally angulated fractures of the distal radius, use of a splint was as effective as a cast with respect to the recovery of physical function. In addition, the devices were comparable in terms of the maintenance of fracture stability and the occurrence of complications. (ClinicalTrials.gov trial register no. NCT00610220.)Fractures of the distal radius are the most common fractures in childhood1 and a frequent reason for visits to the emergency department.2 Although such fractures are often angulated at the time of injury, physicians often accept those with minimal angulation (≤ 15°) because of the unique capacity of skeletally immature bones in children to heal through remodelling.35 These minimally angulated fractures generally do not require reduction, have an excellent long-term prognosis and rarely result in complications such as malunion or deformity.3,5,6The mainstay of treatment for these fractures has been the use of a short arm cast for four to six weeks and several follow-up visits to an orthopedic surgeon.3,5 However, a cast complicates hygiene for a child, and there may be risks that result from a poor fit.7 The noise from a cast saw and fear of its use, as well as the discomfort of the cast, are among the most common negative aspects from a child’s perspective.810 Finally, there is the need for specialized resources for application and removal of the cast. Preliminary evidence from studies involving adults11,12 and studies of stable buckle fractures of the distal radius1316 suggests that splinting offers a safe alternative. However, this approach needs to be compared with the traditional use of casting in children who have minimally angulated and potentially unstable fractures of the distal radius before it can be recommended for clinical practice.We conducted a noninferiority randomized controlled trial to determine whether a prefabricated wrist splint was as effective as routine casting in the recovery of physical function at six weeks in children who had a minimally angulated greenstick or transverse fracture of the distal radius. We also compared fracture angulation, range of motion, grip strength, complications and level of satisfaction.  相似文献
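The noninferiority logic of this trial can be made concrete with a small worked check. The 7-point margin and the reported confidence interval come from the abstract above; the helper function below only illustrates the decision rule and is not the authors' analysis code.

    # Noninferiority check: the splint is declared noninferior if the entire
    # 95% CI for (splint minus cast) lies above the prespecified margin of
    # -7 Activities Scale for Kids points.

    def splint_noninferior(ci_lower: float, margin: float = -7.0) -> bool:
        return ci_lower > margin

    # Reported difference 1.44 points, 95% CI -1.75 to 4.62:
    print(splint_noninferior(-1.75))   # True -> inferiority by >= 7 points is rejected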

18.
Sonja A. Swanson  Ian Colman 《CMAJ》2013,185(10):870-877

Background:

Ecological studies support the hypothesis that suicide may be “contagious” (i.e., exposure to suicide may increase the risk of suicide and related outcomes). However, this association has not been adequately assessed in prospective studies. We sought to determine the association between exposure to suicide and suicidality outcomes in Canadian youth.

Methods:

We used baseline information from the Canadian National Longitudinal Survey of Children and Youth between 1998/99 and 2006/07 with follow-up assessments 2 years later. We included all respondents aged 12–17 years in cycles 3–7 with reported measures of exposure to suicide.

Results:

We included 8766 youth aged 12–13 years, 7802 aged 14–15 years and 5496 aged 16–17 years. Exposure to a schoolmate’s suicide was associated with ideation at baseline among respondents aged 12–13 years (odds ratio [OR] 5.06, 95% confidence interval [CI] 3.04–8.40), 14–15 years (OR 2.93, 95% CI 2.02–4.24) and 16–17 years (OR 2.23, 95% CI 1.43–3.48). Such exposure was associated with attempts among respondents aged 12–13 years (OR 4.57, 95% CI 2.39–8.71), 14–15 years (OR 3.99, 95% CI 2.46–6.45) and 16–17 years (OR 3.22, 95% CI 1.62–6.41). Personally knowing someone who died by suicide was associated with suicidality outcomes for all age groups. We also assessed 2-year outcomes among respondents aged 12–15 years: a schoolmate’s suicide predicted suicide attempts among participants aged 12–13 years (OR 3.07, 95% CI 1.05–8.96) and 14–15 years (OR 2.72, 95% CI 1.47–5.04). Among those who reported a schoolmate’s suicide, personally knowing the decedent did not alter the risk of suicidality.

Interpretation:

We found that exposure to suicide predicts suicide ideation and attempts. Our results support school-wide interventions over current targeted interventions, particularly over strategies that target interventions toward children closest to the decedent.Suicidal thoughts and behaviours are prevalent13 and severe47 among adolescents. One hypothesized cause of suicidality is “suicide contagion” (i.e., exposure to suicide or related behaviours influences others to contemplate, attempt or die by suicide).8 Ecological studies support this theory: suicide and suspected suicide rates increase following a highly publicized suicide.911 However, such studies are prone to ecological fallacy and do not allow for detailed understanding of who may be most vulnerable.Adolescents may be particularly susceptible to this contagion effect. More than 13% of adolescent suicides are potentially explained by clustering;1214 clustering may explain an even larger proportion of suicide attempts.15,16 Many local,17,18 national8,19 and international20 institutions recommend school- or community-level postvention strategies in the aftermath of a suicide to help prevent further suicides and suicidality. These postvention strategies typically focus on a short interval following the death (e.g., months) with services targeted toward the most at-risk individuals (e.g., those with depression).19In this study, we assessed the association between exposure to suicide and suicidal thoughts and attempts among youth, using both cross-sectional and prospective (2-yr follow-up) analyses in a population-based cohort of Canadian youth.  相似文献   
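For readers unfamiliar with the effect measure used here, the sketch below shows how an odds ratio and a Wald 95% confidence interval are derived from a 2 × 2 exposure–outcome table. The counts are hypothetical placeholders for illustration only; they are not data from the survey described above.

    import math

    # Odds ratio and Wald 95% CI from a 2x2 table.
    # a, b: outcome present/absent among exposed (e.g., exposed to a schoolmate's suicide)
    # c, d: outcome present/absent among unexposed
    def odds_ratio_ci(a, b, c, d, z=1.96):
        or_ = (a * d) / (b * c)
        se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
        lo = math.exp(math.log(or_) - z * se_log)
        hi = math.exp(math.log(or_) + z * se_log)
        return or_, lo, hi

    # Hypothetical counts only:
    print(odds_ratio_ci(25, 150, 80, 1600))   # -> roughly (3.33, 2.06, 5.38)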

19.

Background:

Modifiable behaviours during early childhood may provide opportunities to prevent disease processes before adverse outcomes occur. Our objective was to determine whether young children’s eating behaviours were associated with increased risk of cardiovascular disease in later life.

Methods:

In this cross-sectional study involving children aged 3–5 years recruited from 7 primary care practices in Toronto, Ontario, we assessed the relation between eating behaviours, as measured by the NutriSTEP (Nutritional Screening Tool for Every Preschooler) questionnaire (completed by parents), and serum levels of non–high-density lipoprotein (HDL) cholesterol, a surrogate marker of cardiovascular risk. We also assessed the relation between dietary intake and serum non-HDL cholesterol, and between eating behaviours and other laboratory indices of cardiovascular risk (low-density lipoprotein [LDL] cholesterol, apolipoprotein B, HDL cholesterol and apolipoprotein A1).

Results:

A total of 1856 children were recruited from primary care practices in Toronto. Of these children, we included 1076 in our study for whom complete data and blood samples were available for analysis. The eating behaviours subscore of the NutriSTEP tool was significantly associated with serum non-HDL cholesterol (p = 0.03); for each unit increase in the eating behaviours subscore suggesting greater nutritional risk, we saw an increase of 0.02 mmol/L (95% confidence interval [CI] 0.002 to 0.05) in serum non-HDL cholesterol. The eating behaviours subscore was also associated with LDL cholesterol and apolipoprotein B, but not with HDL cholesterol or apolipoprotein A1. The dietary intake subscore was not associated with non-HDL cholesterol.

Interpretation:

Eating behaviours in preschool-aged children are important potentially modifiable determinants of cardiovascular risk and should be a focus for future studies of screening and behavioural interventions.Modifiable behaviours during early childhood may provide opportunities to prevent later chronic diseases, in addition to the behavioural patterns that contribute to them, before adverse outcomes occur. There is evidence that behavioural interventions during early childhood (e.g., ages 3–5 yr) can promote healthy eating.1 For example, repeated exposure to vegetables increases vegetable preference and intake,2 entertaining presentations of fruits (e.g., in the shape of a boat) increase their consumption,3 discussing internal satiety cues with young children reduces snacking,4 serving carrots before the main course (as opposed to with the main course) increases carrot consumption,5 and positive modelling of the consumption of healthy foods increases their intake by young children.6,7 Responsive eating behavioural styles, in which children are given access to healthy foods and allowed to determine the timing and pace of eating in response to internal cues with limited distractions, such as those from television, have been recommended by the Institute of Medicine.8Early childhood is a critical period for assessing the origins of cardiometabolic disease and implementing preventive interventions.8 However, identifying behavioural risk factors for cardiovascular disease during early childhood is challenging, because signs of disease can take decades to appear. One emerging surrogate marker for later cardiovascular risk is the serum concentration of non–high-density lipoprotein (HDL) cholesterol (or total cholesterol minus HDL cholesterol).912 The Young Finns Study found an association between non-HDL cholesterol levels during childhood (ages 3–18 yr) and an adult measure of atherosclerosis (carotid artery intima–media thickness), although this relation was not significant for the subgroup of younger female children (ages 3–9 yr).10,11 The Bogalusa Heart Study, which included a subgroup of children aged 2–15 years, found an association between low-density lipoprotein (LDL) cholesterol concentration (which is highly correlated with non-HDL cholesterol) and asymptomatic atherosclerosis at autopsy.12 The American Academy of Pediatrics recommends non-HDL cholesterol concentration as the key measure for cardiovascular risk screening in children and as the dyslipidemia screening test for children aged 9–11 years.9 Cardiovascular risk stratification tools such as the Reynolds Risk Score (www.reynoldsriskscore.org) and the Framingham Heart Study coronary artery disease 10-year risk calculator (www.framinghamheartstudy.org/risk) for adults do not enable directed interventions during childhood, when cardiovascular disease processes begin.The primary objective of our study was to determine whether eating behaviours at 3–5 years of age, as assessed by the NutriSTEP (Nutritional Screening Tool for Every Preschooler) questionnaire,13,14 are associated with non-HDL cholesterol levels, a surrogate marker of cardiovascular risk.
Our secondary objectives were to determine whether other measures of nutritional risk, such as dietary intake, were associated with non-HDL cholesterol levels and whether eating behaviours were associated with other cardiovascular risk factors, such as LDL cholesterol, apolipoprotein B, HDL cholesterol and apolipoprotein A1.  相似文献
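The headline estimate of a 0.02 mmol/L increase in non-HDL cholesterol per unit of the eating behaviours subscore is a regression slope. The sketch below fits an ordinary least-squares slope on synthetic placeholder data (a true slope of 0.02 is assumed in the data generator); it is not the study's adjusted model.

    import random

    # Synthetic illustration of "mmol/L of non-HDL cholesterol per unit of
    # eating behaviours subscore". Data are simulated placeholders, not study data.
    random.seed(1)
    n = 500
    subscore = [random.uniform(0, 20) for _ in range(n)]                  # nutritional-risk subscore
    non_hdl = [2.8 + 0.02 * x + random.gauss(0, 0.6) for x in subscore]   # mmol/L, assumed slope 0.02

    mean_x = sum(subscore) / n
    mean_y = sum(non_hdl) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(subscore, non_hdl))
             / sum((x - mean_x) ** 2 for x in subscore))
    print(f"estimated slope: {slope:.3f} mmol/L per subscore unit")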

20.
Background:Diverse health care leadership teams may improve health care experiences and outcomes for patients. We sought to explore the race and gender of hospital and health ministry executives in Canada and compare their diversity with that of the populations they serve.Methods:This cross-sectional study included leaders of Canada’s largest hospitals and all provincial and territorial health ministries. We included individuals listed on institutional websites as part of the leadership team if a name and photo were available. Six reviewers coded and analyzed the perceived race and gender of leaders, in duplicate. We compared the proportion of racialized health care leaders with the race demographics of the general population from the 2016 Canadian Census.Results:We included 3056 leaders from 135 institutions, with reviewer concordance on gender for 3022 leaders and on race for 2946 leaders. Reviewers perceived 37 (47.4%) of 78 health ministry leaders as women, and fewer than 5 (< 7%) of 80 as racialized. In Alberta, Saskatchewan, Prince Edward Island and Nova Scotia, provinces with a centralized hospital executive team, reviewers coded 36 (50.0%) of 72 leaders as women and 5 (7.1%) of 70 as racialized. In British Columbia, New Brunswick and Newfoundland and Labrador, provinces with hospital leadership by region, reviewers perceived 120 (56.1%) of 214 leaders as women and 24 (11.5%) of 209 as racialized. In Manitoba, Ontario and Quebec, where leadership teams exist at each hospital, reviewers perceived 1326 (49.9%) of 2658 leaders as women and 243 (9.2%) of 2633 as racialized. We calculated the representation gap between racialized executives and the racialized population as 14.5% for British Columbia, 27.5% for Manitoba, 20.7% for Ontario, 12.4% for Quebec, 7.6% for New Brunswick, 7.3% for Prince Edward Island and 11.6% for Newfoundland and Labrador.Interpretation:In a study of more than 3000 health care leaders in Canada, gender parity was present, but racialized executives were substantially under-represented. This work should prompt health care institutions to increase racial diversity in leadership.

Race- and gender-based disparities in health care leadership14 may negatively affect the health of marginalized patients.5,6 Diverse leadership is an integral step in establishing equitable health care institutions that serve the needs of all community members.7 Many barriers prevent racialized people, women and gender nonbinary individuals from attaining leadership positions, including reduced access to networking opportunities,810 discrimination from patients and colleagues2,1113 and an institutional culture that views white, male leaders as most effective.14,15 The intersectional effects of discrimination may intensify these barriers for racialized women and nonbinary people.16,17 Fundamentally, diversity and inclusion in our institutions are important as a matter of basic human rights for all people.18Health care leadership in Europe and the United States is thought to lack gender and racial diversity.1922 The degree to which these imbalances exist across Canadian health care institutions is not clear. Despite past evidence that men hold a disproportionate number of health care leadership positions in Canada,23,24 a recent study noted gender parity among leaders of provincial and territorial ministries of health.25 Among university faculty26,27 and administration,28 racialized individuals appear to be under-represented, suggesting that a similar trend may exist in health care leadership.Race and gender can be studied in many ways.29 Perceived race is a measure of “the race that others believe you to be,” and these assessments “influence how people are treated and form the basis of racial discrimination including nondeliberate actions that nonetheless lead to socioeconomic inequities.”29 Similarly, perceived gender refers to an observer’s assumptions about a person’s gender, which can lead to differential and unfair treatment.30 Assessing perceived race and gender provides crucial insights into the ways in which social inequalities are informed and produced.29 In this study, we sought to identify the perceived race and gender of hospital executive leaders in Canada and of nonelected leaders of the provincial and territorial health ministries. Furthermore, we wanted to analyze how the perceived racial composition of health care leadership compares with the racial composition of the population in the geographic areas that these leaders serve.  相似文献
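The "representation gap" figures above reduce to simple arithmetic: the share of the general population perceived as racialized minus the share of leaders perceived as racialized. The sketch below illustrates this calculation; the census share used in the example is a placeholder assumption, not a figure from the study or the 2016 Census.

    # Representation gap = census share racialized - leader share racialized.
    def representation_gap(racialized_leaders: int, total_leaders: int,
                           census_share_racialized: float) -> float:
        return census_share_racialized - racialized_leaders / total_leaders

    # Example with the pooled hospital-leader counts reported above
    # (243 of 2633 coded as racialized) and a hypothetical census share of 30%:
    print(f"{representation_gap(243, 2633, 0.30):.1%}")   # ~ 20.8%

Note that the study reports the gap province by province against that province's 2016 Census demographics rather than from pooled counts as in this example.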
