21.
When the individual outcomes within a composite outcome appear to have different treatment effects, either in magnitude or direction, researchers may question the validity or appropriateness of using this composite outcome as a basis for measuring overall treatment effect in a randomized controlled trial. The question remains how to distinguish random variation in estimated treatment effects from important heterogeneity within a composite outcome. This paper suggests there may be some utility in directly testing the assumption of homogeneity of treatment effect across the individual outcomes within a composite outcome. We describe a treatment heterogeneity test for composite outcomes based on a class of models used for the analysis of correlated data arising from the measurement of multiple outcomes for the same individuals. Such a test may be useful both when planning a trial with a primary composite outcome and at trial end, in the final analysis and presentation of results. We demonstrate how to determine the statistical power to detect composite outcome treatment heterogeneity using the POISE Trial data. We then describe how this test may be incorporated into a presentation of trial results with composite outcomes. We conclude that it may be informative for trialists to assess the consistency of treatment effects across the individual outcomes within a composite outcome using a formalized methodology, and the suggested test represents one option.
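The abstract does not spell out the model, but one concrete reading of a correlated-data test of this kind is a GEE logistic regression on the component outcomes stacked in long format, with a Wald test of the treatment-by-component interaction. The sketch below uses simulated data and hypothetical column names (pid, treat, component, event); it illustrates the general approach, not the paper's exact procedure.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(0)

# Simulated trial: 400 patients, composite with three components (hypothetical names)
n = 400
patients = pd.DataFrame({"pid": np.arange(n), "treat": rng.integers(0, 2, n)})
components = ["death", "mi", "stroke"]

# Stack to long format: one row per patient per component outcome
long = patients.loc[patients.index.repeat(len(components))].reset_index(drop=True)
long["component"] = components * n

# Component-specific baseline log-odds and (deliberately heterogeneous) treatment effects
base = long["component"].map({"death": -2.5, "mi": -1.5, "stroke": -2.0})
effect = long["component"].map({"death": -0.1, "mi": -0.6, "stroke": 0.0})
long["event"] = rng.binomial(1, 1 / (1 + np.exp(-(base + effect * long["treat"]))))

# GEE logistic model with treatment-by-component interaction and an
# exchangeable working correlation within patient
model = smf.gee("event ~ treat * C(component)", groups="pid", data=long,
                family=sm.families.Binomial(), cov_struct=sm.cov_struct.Exchangeable())
res = model.fit()

# Wald test of the interaction terms: under homogeneity of the treatment
# effect across components, all interaction coefficients are zero
idx = [i for i, name in enumerate(res.model.exog_names) if "treat:" in name]
b = np.asarray(res.params)[idx]
V = np.asarray(res.cov_params())[np.ix_(idx, idx)]
w = float(b @ np.linalg.solve(V, b))
print(f"Wald chi-square = {w:.2f} on {len(idx)} df, p = {stats.chi2.sf(w, len(idx)):.3f}")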
22.

Background:

Meta-analyses of continuous outcomes typically provide enough information for decision-makers to evaluate the extent to which chance can explain apparent differences between interventions. The interpretation of the magnitude of these differences — from trivial to large — can, however, be challenging. We investigated clinicians’ understanding and perceptions of usefulness of 6 statistical formats for presenting continuous outcomes from meta-analyses (standardized mean difference, minimal important difference units, mean difference in natural units, ratio of means, relative risk and risk difference).

Methods:

We invited 610 staff and trainees in internal medicine and family medicine programs in 8 countries to participate. Paper-based, self-administered questionnaires presented summary estimates of hypothetical interventions versus placebo for chronic pain. The estimates showed either a small or a large effect for each of the 6 statistical formats for presenting continuous outcomes. Questions addressed participants’ understanding of the magnitude of treatment effects and their perception of the usefulness of the presentation format. We randomly assigned participants to 1 of 4 versions of the questionnaire, each with a different effect size (large or small) and presentation order for the 6 formats (1 to 6, or 6 to 1).

Results:

Overall, 531 (87.0%) of the clinicians responded. Respondents best understood risk difference, followed by relative risk and ratio of means. Similarly, they perceived the dichotomous presentation of continuous outcomes (relative risk and risk difference) to be most useful. Presenting results as a standardized mean difference, the longest standing and most widely used approach, was poorly understood and perceived as least useful.

Interpretation:

None of the presentation formats were well understood or perceived as extremely useful. Clinicians best understood the dichotomous presentations of continuous outcomes and perceived them to be the most useful. Further initiatives to help clinicians better grasp the magnitude of the treatment effect are needed.

Health professionals increasingly rely on summary estimates from systematic reviews and meta-analyses to guide their clinical decisions and to provide information for shared decision-making. Meta-analyses of clinical trials typically provide the information necessary for decision-makers to evaluate the extent to which chance can explain apparent intervention effects (i.e., statistical significance). However, interpreting the magnitude of the treatment effect — from trivial to large — particularly for continuous outcome measures, can be challenging.

Such challenges include decision-makers’ unfamiliarity with the instruments used to measure the outcome. For instance, without further information, clinicians may have difficulty grasping the importance of a 5-point difference on the Short-Form Health Survey-36 (SF-36) or a 1-point difference on a visual analogue scale for pain.1 Second, trials often use different instruments to measure the same construct. For instance, investigators may measure physical function among patients with arthritis using 1 of 5 instruments (the Western Ontario and McMaster Universities Arthritis Index using either a visual analogue or Likert scale; the Arthritis Impact Measurement Scale; the SF-36 Physical Function; or the Lequesne index).2,3

Authors have several options for pooling results of continuous outcomes. When all trials have used the same instrument to measure outcomes such as physical function or pain, the most straightforward method is to present the mean difference in natural units between the intervention and control groups. When trialists have used different instruments to measure the same construct, authors of systematic reviews typically report differences between intervention and control groups in standard deviation units, an approach known as the standardized mean difference (SMD). This approach involves dividing the mean difference in each trial by the pooled standard deviation for that trial’s outcome.4

For meta-analyses of outcomes measured using different instruments, presenting results as an SMD is the longest standing and most widely used approach and is recommended in the Cochrane handbook for systematic reviews of interventions.4 Limitations of this approach include, however, statistical bias toward decreased treatment effects,5,6 the possibility that decision-makers will find the measure difficult to interpret7,8 and the possibility that the same treatment effect will appear different depending on whether the study population had similar results in the measure of interest (i.e., if homogeneous, a small standard deviation) or varied greatly in the measure of interest (i.e., if heterogeneous, a large standard deviation).9,10

Several research groups have proposed alternative statistical formats for presenting continuous outcomes from meta-analyses that they postulate clinicians will more easily interpret.6–8,11–16 The Grading of Recommendations Assessment, Development and Evaluation (GRADE) Working Group recently provided an overview of methods for presenting pooled continuous data.9,10 These alternatives (Appendix 1, available at www.cmaj.ca/lookup/suppl/doi:10.1503/cmaj.150430/-/DC1), although intuitively compelling, have seen limited use.

We conducted a survey to determine clinicians’ understanding of the magnitude of treatment effect for 6 approaches to the presentation of continuous outcomes from meta-analyses, as well as their perceptions of the usefulness of each approach for clinical decision-making. We also evaluated whether their understanding and perceptions of usefulness were influenced by country, medical specialty, clinical experience or training in health research methodology.
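The text above defines the SMD as the mean difference divided by the pooled standard deviation of the trial's outcome. A minimal sketch of that calculation, alongside two of the other formats studied (mean difference in natural units, ratio of means, and mean difference in MID units), using made-up two-arm summary data rather than anything from the survey or any real trial:

import numpy as np

# Made-up two-arm summary data for one trial (pain score; lower is better)
n_t, mean_t, sd_t = 60, 3.1, 2.0   # intervention arm
n_c, mean_c, sd_c = 58, 4.0, 2.2   # control arm

# Mean difference in natural units
md = mean_t - mean_c

# Standardized mean difference: mean difference divided by the pooled SD
sd_pooled = np.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2))
smd = md / sd_pooled

# Ratio of means
rom = mean_t / mean_c

# Mean difference in minimal important difference (MID) units,
# assuming a hypothetical MID of 1.0 point on this scale
md_in_mid_units = md / 1.0

print(f"MD = {md:.2f}, SMD = {smd:.2f}, RoM = {rom:.2f}, MD in MID units = {md_in_mid_units:.2f}")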
23.

Objectives

To compare the predictive accuracy of the frailty index (FI) of deficit accumulation and the phenotypic frailty (PF) model in predicting risks of future falls, fractures and death in women aged ≥55 years.

Methods

Based on data from the Global Longitudinal Study of Osteoporosis in Women (GLOW) 3-year Hamilton cohort (n = 3,985), we compared the predictive accuracy of the FI and PF for risks of falls, fractures and death using three strategies: (1) estimating the association of adverse health outcomes with each one-fifth (i.e., 20%) increment in the FI and PF; (2) trichotomizing the FI into robust, pre-frail and frail groups based on the overlap of its density distribution with the three groups defined by the PF; (3) categorizing the women according to the probability of a fall during the third year of follow-up as predicted by the FI. Logistic regression models were used for falls and death, while survival analyses were conducted for fractures.

Results

The FI and PF agreed well with each other (correlation coefficients ≥ 0.56) in all three strategies. Both the FI and PF significantly predicted adverse health outcomes. The FI quantified the risks of future falls, fractures and death more precisely than the PF. Both the FI and PF discriminated risks of adverse outcomes in multivariable models, with acceptable and comparable areas under the curve (AUC) for falls (AUC ≥ 0.68) and death (AUC ≥ 0.79), and c-indices for fractures (c-index ≥ 0.69).

Conclusions

The FI is comparable with the PF in predicting risks of adverse health outcomes. These findings suggest flexibility in the choice of frailty model for older adults in population-based settings.
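The abstract does not show the computation itself. As a rough illustration of a deficit-accumulation FI and of the discrimination (AUC) reported above, the sketch below builds an FI as the proportion of deficits present and relates it to a simulated fall outcome with logistic regression; the deficit items, sample size and effect sizes are invented, not the GLOW data.

import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

# Hypothetical data: 30 binary deficit items per woman (invented, not the GLOW items)
n, n_items = 1000, 30
deficits = pd.DataFrame(rng.binomial(1, 0.2, size=(n, n_items)),
                        columns=[f"deficit_{i}" for i in range(n_items)])

# Deficit-accumulation frailty index: proportion of deficits present (0 to 1)
fi = deficits.mean(axis=1)

# Simulate a fall outcome whose risk rises with the FI (illustrative only)
fall = rng.binomial(1, 1 / (1 + np.exp(-(-2.0 + 4.0 * fi))))

# Logistic regression of falls on the FI, per one-fifth (0.2) increment
X = sm.add_constant(fi / 0.2)
fit = sm.Logit(fall, X).fit(disp=False)
odds_ratio = np.exp(fit.params.iloc[1])

# Discrimination: area under the ROC curve
auc = roc_auc_score(fall, fit.predict(X))
print(f"OR per 0.2 increment in FI = {odds_ratio:.2f}, AUC = {auc:.2f}")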
24.
25.
26.
Background:

Remote ischemic preconditioning is a simple therapy that may reduce cardiac and kidney injury. We undertook a randomized controlled trial to evaluate the effect of this therapy on markers of heart and kidney injury after cardiac surgery.

Methods:

Patients at high risk of death within 30 days after cardiac surgery were randomly assigned to undergo remote ischemic preconditioning or a sham procedure after induction of anesthesia. The preconditioning therapy was three 5-minute cycles of thigh ischemia, with 5 minutes of reperfusion between cycles. The sham procedure was identical except that ischemia was not induced. The primary outcome was peak creatine kinase–myocardial band (CK-MB) within 24 hours after surgery (expressed as multiples of the upper limit of normal, with log transformation). The secondary outcome was change in creatinine level within 4 days after surgery (expressed as log-transformed micromoles per litre). Patient-important outcomes were assessed up to 6 months after randomization.

Results:

We randomly assigned 128 patients to remote ischemic preconditioning and 130 to the sham therapy. There were no significant differences in postoperative CK-MB (absolute mean difference 0.15, 95% confidence interval [CI] −0.07 to 0.36) or creatinine (absolute mean difference 0.06, 95% CI −0.10 to 0.23). Other outcomes did not differ significantly for remote ischemic preconditioning relative to the sham therapy: for myocardial infarction, relative risk (RR) 1.35 (95% CI 0.85 to 2.17); for acute kidney injury, RR 1.10 (95% CI 0.68 to 1.78); for stroke, RR 1.02 (95% CI 0.34 to 3.07); and for death, RR 1.47 (95% CI 0.65 to 3.31).

Interpretation:

Remote ischemic preconditioning did not reduce myocardial or kidney injury during cardiac surgery. This type of therapy is unlikely to substantially improve patient-important outcomes in cardiac surgery. Trial registration: ClinicalTrials.gov, no. NCT01071265.

Each year, 2 million patients worldwide undergo cardiac surgery. For more than 25% of these patients, the surgery is complicated by myocardial infarction (MI) and/or acute kidney injury, both of which are strongly associated with morbidity and mortality.1–3 Preventing MI and acute kidney injury after cardiac surgery would improve survival.

An important cause of MI and acute kidney injury in patients undergoing cardiac surgery is ischemia–reperfusion injury.4,5 This type of injury begins as ischemia, which is then exacerbated by a systemic inflammatory response upon restoration of organ perfusion.6 Remote ischemic preconditioning may mitigate ischemia–reperfusion damage. It is accomplished by inducing, before surgery, brief episodes of ischemia in a limb, which lead to widespread activation of endogenous cellular systems that may protect organs from subsequent severe ischemia and reperfusion.7–9

Small randomized controlled trials evaluating the efficacy of remote ischemic preconditioning have had mixed results.10–17 Interpretation of their data is difficult because of small sample sizes and heterogeneity in the preconditioning procedures and patient populations (e.g., few trials have evaluated patients at high risk of organ injury and postoperative death). Whether remote ischemic preconditioning effectively mitigates ischemia–reperfusion injury therefore remains uncertain. We undertook the Remote Ischemic Preconditioning in Cardiac Surgery Trial (Remote IMPACT) to determine whether this procedure reduces myocardial and kidney injury. We proposed that a large trial to determine the effect on clinically important outcomes would be worthwhile only if a substantial effect on myocardial or kidney injury, or both, were observed in the current study.
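The results above report relative risks with 95% confidence intervals. As a small worked example of how such an interval is commonly obtained (on the log scale, with a normal approximation), using made-up event counts rather than the trial's data:

import numpy as np

# Made-up 2x2 counts (not the trial's event counts): events / total per arm
a, n1 = 27, 128   # events in the preconditioning arm (hypothetical)
b, n2 = 19, 130   # events in the sham arm (hypothetical)

# Relative risk and a 95% CI on the log scale
rr = (a / n1) / (b / n2)
se_log_rr = np.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)
lo, hi = np.exp(np.log(rr) + np.array([-1.96, 1.96]) * se_log_rr)
print(f"RR = {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")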
27.
In this paper, I present a Bayesian approach to estimation of the number needed to treat (NNT). The use of NNT as a measure of clinical benefit is now commonplace. Various methods of estimation have been proposed, but none of them seems entirely satisfactory. Very little has been done to understand the statistical properties of NNT. Here, I derive the posterior distribution of NNT and use simulations to investigate the general behaviour of the distribution. The posterior mode of the distribution is proposed as a point estimate, and results are compared with the conventional estimate of NNT obtained by inverting the estimated risk difference.
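The paper's derivation is not reproduced in the abstract. The sketch below illustrates the general idea by simulation: independent Beta posteriors for the two arms' event probabilities induce a posterior for NNT = 1 / (risk difference), whose mode can then be compared with the plug-in inversion estimate. The counts, uniform priors and truncation at NNT < 100 are assumptions for illustration only, not the paper's choices.

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical arm-level counts (not from the paper): events / n
events_c, n_c = 30, 100   # control
events_t, n_t = 18, 100   # treatment

# Conventional estimate: invert the observed risk difference
nnt_inversion = 1 / (events_c / n_c - events_t / n_t)

# Posterior draws of the event probabilities under uniform Beta(1, 1) priors
p_c = rng.beta(1 + events_c, 1 + n_c - events_c, size=50_000)
p_t = rng.beta(1 + events_t, 1 + n_t - events_t, size=50_000)

# Induced posterior of NNT = 1 / (p_c - p_t); keep only draws where the
# treatment is beneficial, truncated at 100, so the density has one branch
nnt = 1 / (p_c - p_t)
nnt_benefit = nnt[(p_c > p_t) & (nnt < 100)]

# Posterior mode via a kernel density estimate over a grid
grid = np.linspace(nnt_benefit.min(), nnt_benefit.max(), 2000)
nnt_mode = grid[np.argmax(stats.gaussian_kde(nnt_benefit)(grid))]

print(f"NNT by inversion = {nnt_inversion:.1f}, posterior mode ≈ {nnt_mode:.1f}")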
28.
Conventional methods for sample size calculation for population-based longitudinal studies tend to overestimate statistical power by overlooking important determinants of the required sample size, such as measurement error and unmeasured etiological determinants. In contrast, a simulation-based sample size calculation, if designed properly, allows these determinants to be taken into account and offers flexibility in accommodating complex study design features.

The Canadian Longitudinal Study on Aging (CLSA) is a Canada-wide, 20-year follow-up study of 30,000 people between the ages of 45 and 85 years, with in-depth information collected every 3 years. A simulation study, based on an illness-death model, was conducted to: (1) investigate the statistical power profile of the CLSA to detect the effect of environmental and genetic risk factors, and their interaction, on age-related chronic diseases; and (2) explore design alternatives and implementation strategies for increasing the statistical power of population-based longitudinal studies in general.

The results showed that the statistical power to identify the effect of environmental and genetic risk exposures, and their interaction, on a disease was boosted when: (1) the prevalence of the risk exposures increased; (2) the disease of interest was relatively common in the population; and (3) the risk exposures were measured accurately. In addition, collecting data every 3 years in the CLSA led to slightly lower statistical power than a design in which participants underwent continuous health monitoring.

The CLSA had sufficient power to detect a small (1 < hazard ratio (HR) ≤ 1.5) or moderate (1.5 < HR ≤ 2.0) effect of the environmental risk exposure, as long as the risk exposure and the disease of interest were not rare. It had enough power to detect a moderate or large (2.0 < HR ≤ 3.0) effect of the genetic risk exposure when the prevalence of the risk exposure was not very low (≥0.1) and the disease of interest was not rare (such as diabetes and dementia). The CLSA had enough power to detect a large effect of the gene-environment interaction only when both risk exposures had relatively high prevalence (0.2) and the disease of interest was very common (such as diabetes).

The minimum detectable hazard ratios (MDHR) of the CLSA for the environmental and genetic risk exposures obtained from this simulation study were larger than those calculated with the conventional sample size calculation method. For example, the MDHR for the environmental risk exposure was 1.15 according to the conventional method if the prevalence of the risk exposure was 0.1 and the disease of interest was dementia; in contrast, according to this simulation study, the MDHR was 1.61 if the same exposure was measured every 3 years with a misclassification rate of 0.1.

With a given sample size, higher statistical power could be achieved by increasing the measurement frequency in participants at high risk of declining health status or changing risk exposures, and by increasing the measurement accuracy of diseases and risk exposures. A properly designed simulation-based sample size calculation is superior to conventional methods when rigorous sample size calculation is necessary.
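The abstract describes the simulation-based approach only at a high level. The sketch below shows the bare skeleton of such a power calculation for a single binary exposure measured with misclassification, using exponential event times and a Cox fit; the CLSA simulations were built on a richer illness-death model with 3-yearly assessments, so the model, parameter values and function name here are illustrative assumptions, not the study's.

import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(3)

def simulated_power(n=30_000, prev=0.1, true_hr=1.5, misclass=0.1,
                    base_rate=0.01, follow_up=20, n_sim=100, alpha=0.05):
    """Crude power estimate for detecting a hazard ratio when a binary
    exposure is misclassified; a sketch of the general idea only."""
    rejections = 0
    for _ in range(n_sim):
        true_exposure = rng.binomial(1, prev, n)
        # non-differential misclassification: each measured value flips with prob. misclass
        measured = np.abs(true_exposure - rng.binomial(1, misclass, n))
        # exponential event times driven by the true exposure, censored at end of follow-up
        rate = base_rate * np.exp(np.log(true_hr) * true_exposure)
        event_time = rng.exponential(1 / rate)
        time = np.minimum(event_time, follow_up)
        status = (event_time <= follow_up).astype(int)
        # Cox model fitted with the error-prone measured exposure
        res = sm.PHReg(time, measured[:, None], status=status).fit()
        z = res.params[0] / res.bse[0]
        if 2 * stats.norm.sf(abs(z)) < alpha:
            rejections += 1
    return rejections / n_sim

print(f"Estimated power: {simulated_power():.2f}")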