Similar Articles (20 results)
1.

Background

The increased use of meta-analysis in systematic reviews of healthcare interventions has highlighted several types of bias that can arise during the completion of a randomised controlled trial. Study publication bias and outcome reporting bias have been recognised as potential threats to the validity of meta-analysis and can make the readily available evidence unreliable for decision making.

Methodology/Principal Findings

In this update, we review and summarise the evidence from cohort studies that have assessed study publication bias or outcome reporting bias in randomised controlled trials. Twenty studies were eligible, of which four were newly identified in this update. Only two followed the cohort all the way through from protocol approval to information regarding publication of outcomes. Fifteen of the studies investigated study publication bias and five investigated outcome reporting bias. Three studies found that statistically significant outcomes had higher odds of being fully reported compared to non-significant outcomes (range of odds ratios: 2.2 to 4.7). In comparing trial publications to protocols, we found that 40–62% of studies had at least one primary outcome that was changed, introduced, or omitted. We decided not to undertake meta-analysis due to the differences between studies.
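As a rough illustration of where such reporting odds ratios come from, the sketch below computes an odds ratio and a Woolf 95% confidence interval from a hypothetical 2×2 table of outcomes. The counts are invented for illustration and are not data from the review or any of its included studies.

```python
import math

# Hypothetical 2x2 table (illustrative counts, not from the review):
# rows: statistically significant vs non-significant outcomes
# cols: fully reported vs not fully reported
sig_reported, sig_unreported = 80, 20
nonsig_reported, nonsig_unreported = 50, 50

# Odds ratio: odds of full reporting for significant vs non-significant outcomes
odds_ratio = (sig_reported / sig_unreported) / (nonsig_reported / nonsig_unreported)

# Woolf (log) method for the 95% confidence interval
se_log_or = math.sqrt(1 / sig_reported + 1 / sig_unreported +
                      1 / nonsig_reported + 1 / nonsig_unreported)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.1f}, 95% CI {lo:.1f}-{hi:.1f}")  # OR = 4.0, 95% CI 2.1-7.5
```

An OR of 4.0 here would read the same way as the review's findings: significant outcomes have roughly four times the odds of being fully reported.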

Conclusions

This update does not change the conclusions of the review in which 16 studies were included. Direct empirical evidence for the existence of study publication bias and outcome reporting bias is shown. There is strong evidence of an association between significant results and publication; studies that report positive or significant results are more likely to be published and outcomes that are statistically significant have higher odds of being fully reported. Publications have been found to be inconsistent with their protocols. Researchers need to be aware of the problems of both types of bias and efforts should be concentrated on improving the reporting of trials.

2.

Background

The increased use of meta-analysis in systematic reviews of healthcare interventions has highlighted several types of bias that can arise during the completion of a randomised controlled trial. Study publication bias has been recognised as a potential threat to the validity of meta-analysis and can make the readily available evidence unreliable for decision making. Until recently, outcome reporting bias has received less attention.

Methodology/Principal Findings

We review and summarise the evidence from a series of cohort studies that have assessed study publication bias and outcome reporting bias in randomised controlled trials. Sixteen studies were eligible, of which only two followed the cohort all the way through from protocol approval to information regarding publication of outcomes. Eleven of the studies investigated study publication bias and five investigated outcome reporting bias. Three studies found that statistically significant outcomes had higher odds of being fully reported compared to non-significant outcomes (range of odds ratios: 2.2 to 4.7). In comparing trial publications to protocols, we found that 40–62% of studies had at least one primary outcome that was changed, introduced, or omitted. We decided not to undertake meta-analysis due to the differences between studies.

Conclusions

Recent work provides direct empirical evidence for the existence of study publication bias and outcome reporting bias. There is strong evidence of an association between significant results and publication; studies that report positive or significant results are more likely to be published and outcomes that are statistically significant have higher odds of being fully reported. Publications have been found to be inconsistent with their protocols. Researchers need to be aware of the problems of both types of bias and efforts should be concentrated on improving the reporting of trials.

3.

Background

Most publications about selective reporting in clinical trials have focussed on outcomes. However, selective reporting of analyses for a given outcome may also affect the validity of findings. If analyses are selected on the basis of the results, reporting bias may occur. The aims of this study were to review and summarise the evidence from empirical cohort studies that assessed discrepant or selective reporting of analyses in randomised controlled trials (RCTs).

Methods and Findings

A systematic review was conducted and included cohort studies that assessed any aspect of the reporting of analyses of RCTs by comparing different trial documents, e.g., protocol compared to trial report, or different sections within a trial publication. The Cochrane Methodology Register, Medline (Ovid), PsycInfo (Ovid), and PubMed were searched on 5 February 2014. Two authors independently selected studies, performed data extraction, and assessed the methodological quality of the eligible studies. Twenty-two studies (containing 3,140 RCTs) published between 2000 and 2013 were included. Twenty-two studies reported on discrepancies between information given in different sources. Discrepancies were found in statistical analyses (eight studies), composite outcomes (one study), the handling of missing data (three studies), unadjusted versus adjusted analyses (three studies), handling of continuous data (three studies), and subgroup analyses (12 studies). Discrepancy rates varied, ranging from 7% (3/42) to 88% (7/8) in statistical analyses, 46% (36/79) to 82% (23/28) in adjusted versus unadjusted analyses, and 61% (11/18) to 100% (25/25) in subgroup analyses. This review is limited in that none of the included studies investigated the evidence for bias resulting from selective reporting of analyses. It was not possible to combine studies to provide overall summary estimates, and so the results of studies are discussed narratively.

Conclusions

Discrepancies in analyses between publications and other study documentation were common, but reasons for these discrepancies were not discussed in the trial reports. To ensure transparency, protocols and statistical analysis plans need to be published, and investigators should adhere to these or explain discrepancies. Please see later in the article for the Editors' Summary.

4.

Context

Since September 2005, the International Committee of Medical Journal Editors (ICMJE) has required that randomised controlled trials (RCTs) be prospectively registered in a publicly accessible database. After registration, a trial registration number (TRN) is assigned to each RCT, which should make it easier to identify future publications and cross-check published results with associated registry entries, as long as the unique identification number is reported in the article.

Objective

Our primary objective was to evaluate the reporting of trial registration numbers in biomedical publications. Secondary objectives were to evaluate how many published RCTs had been registered and how many registered RCTs had resulted in a publication, using a sample of trials from the Netherlands Trials Register (NTR).

Design, Setting

Two different samples of RCTs were examined: 1) RCTs published in November 2010 in core clinical journals identified in MEDLINE; 2) RCTs registered in the NTR with a latest expected end date of 31 August 2008.

Results

Fifty-five percent (166/302) of the reports of RCTs found in MEDLINE and 60% (186/312) of the published reports of RCTs from the NTR cohort contained a TRN. In both samples, reporting of a TRN was more likely in RCTs published in ICMJE member journals as compared to non-ICMJE member journals (MEDLINE 58% vs. 45%; NTR: 70% vs. 49%). Thirty-nine percent of published RCTs in the MEDLINE sample appear not to have been registered, and 48% of RCTs registered in the NTR seemed not to have been published at least two years after the expected date for study completion.

Conclusion

Our results show that further promotion and implementation of trial registration and accurate reporting of TRNs are still needed. This might be helped by inclusion of the TRN as an item on the CONSORT checklist.

5.
6.

Objective

In an effort to understand how results of human clinical trials are made public, we analyze a large set of clinical trials registered at ClinicalTrials.gov, the world’s largest clinical trial registry.

Materials and Methods

We considered two trial result artifacts: (1) existence of a trial result journal article that is formally linked to a registered trial or (2) the deposition of a trial’s basic summary results within the registry.

Results

The study sample consisted of 8907 interventional, phase 2 or higher clinical trials completed in 2006–2009. The majority of trials (72.2%) had no structured trial-article link present. A total of 2367 trials (26.6%) deposited basic summary results within the registry. Of those, 969 trials (10.9%) were classified as trials with extended results and 1398 trials (15.7%) were classified as trials with only required basic results. The majority of the trials (54.8%) had no evidence of results, based on either linked result articles or basic summary results (silent trials), while a minimal number (9.2%) reported results through both registry deposition and publication.

Discussion

Our study analyzes the body of linked knowledge around clinical trials (which we refer to as the “trialome”). Our results show that most trials do not report results and, for those that do, there is minimal overlap in the types of reporting. We identify several mechanisms by which the linkages between trials and their published results can be increased.

Conclusion

Our study shows that even when combining publications and registry results, and despite availability of several information channels, trial sponsors do not sufficiently meet the mandate to inform the public either via a linked result publication or basic results submission.

7.

Background

Clear, transparent and sufficiently detailed abstracts of randomized controlled trials (RCTs) published in journal articles are important because readers will often base their initial assessment of a trial on such information. However, little is known about the quality of reporting in abstracts of RCTs published in medical journals in China.

Methods

We identified RCT abstracts from five leading Chinese medical journals published between 1998 and 2007 and indexed in MEDLINE. We assessed the quality of reporting of these abstracts based on the Consolidated Standards of Reporting Trials (CONSORT) abstract checklist. We also sought to identify whether any differences exist in reporting between the Chinese and English language versions of the same abstract.

Results

We identified 332 RCT abstracts eligible for examination. Overall, the abstracts we examined reported 0–8 items as designated in the CONSORT checklist. On average, three items were reported per abstract. Details of the interventions (288/332; 87%), the number of participants randomized (216/332; 65%) and study objectives (109/332; 33%) were the top three items reported. Only two RCT abstracts reported details of trial registration, no abstracts reported the method of allocation concealment and only one mentioned specifically who was blinded. In terms of the proportion of RCT abstracts fulfilling a criterion, the absolute difference (percentage points) between the Chinese and English abstracts was 10% (ranging from 0 to 25%) on average, per item.

Conclusions

The quality of reporting in abstracts of RCTs published in Chinese medical journals needs to be improved. We hope that the introduction and endorsement of the CONSORT for Abstracts guidelines by journals reporting RCTs will lead to improvements in the quality of reporting.

8.
9.

Introduction

Although selective reporting of clinical trial results introduces bias into evidence-based clinical decision making, the extent of publication bias in pediatric epilepsy is currently unknown. Because there is considerable ambiguity in the treatment of an important and common clinical problem, pediatric seizures, we assessed the public availability of results of phase 3 clinical trials that evaluated treatments of seizures in children and adolescents as a surrogate for the extent of publication bias in pediatric epilepsy.

Methods

We determined the proportion of published and unpublished study results of phase 3 clinical trials that were registered as completed on ClinicalTrials.gov. We searched ClinicalTrials.gov, PubMed, and Google Scholar for publications and contacted principal investigators or sponsors. The analysis was performed according to STROBE criteria.

Results

Of the pediatric phase 3 clinical trials completed before 2014 (N = 99), 75 (76%) were published but 24 (24%) remained unpublished. The unpublished studies concealed evidence from 4,437 patients. Mean time to publication was 25 ± 15.6 (SD) months, more than twice as long as mandated.

Conclusion

Ten years after the ICMJE’s clinical trials registration initiative there is still a considerable amount of selective reporting and delay of publication that potentially distorts the body of evidence in the treatment of pediatric seizures.

10.

Background

Acknowledgment of all serious limitations to research evidence is important for patient care and scientific progress. Formal research on how biomedical authors acknowledge limitations is scarce.

Objectives

To assess the extent to which limitations are acknowledged in biomedical publications explicitly, and implicitly by investigating the use of phrases that express uncertainty, so-called hedges; to assess the association between industry support and the extent of hedging.

Design

We analyzed reporting of limitations and use of hedges in 300 biomedical publications published in 30 high- and medium-ranked journals in 2007. Hedges were assessed using linguistic software that assigned weights between 1 and 5 to each expression of uncertainty.

Results

Twenty-seven percent of publications (81/300) did not mention any limitations, while 73% acknowledged a median of 3 (range 1–8) limitations. Five percent mentioned a limitation in the abstract. After controlling for confounders, publications on industry-supported studies used significantly fewer hedges than publications not so supported (p = 0.028).

Limitations

Detection and classification of limitations were, to some extent, subjective. The weighting scheme used by the hedging detection software also has subjective elements.

Conclusions

Reporting of limitations in biomedical publications is probably very incomplete. Transparent reporting of limitations may protect clinicians and guideline committees against overly confident beliefs and decisions and support scientific progress through better design, conduct or analysis of new studies.

11.

Background

Previous studies of drug trials submitted to regulatory authorities have documented selective reporting of both entire trials and favorable results. The objective of this study is to determine the publication rate of efficacy trials submitted to the Food and Drug Administration (FDA) in approved New Drug Applications (NDAs) and to compare the trial characteristics as reported by the FDA with those reported in publications.

Methods and Findings

This is an observational study of all efficacy trials found in approved NDAs for New Molecular Entities (NMEs) from 2001 to 2002 inclusive and all published clinical trials corresponding to the trials within the NDAs. For each trial included in the NDA, we assessed its publication status, primary outcome(s) reported and their statistical significance, and conclusions. Seventy-eight percent (128/164) of efficacy trials contained in FDA reviews of NDAs were published. In a multivariate model, trials with favorable primary outcomes (OR = 4.7, 95% confidence interval [CI] 1.33–17.1, p = 0.018) and active controls (OR = 3.4, 95% CI 1.02–11.2, p = 0.047) were more likely to be published. Forty-one primary outcomes from the NDAs were omitted from the papers. Papers included 155 outcomes that were in the NDAs, 15 additional outcomes that favored the test drug, and two other neutral or unknown additional outcomes. Excluding outcomes with unknown significance, there were 43 outcomes in the NDAs that did not favor the NDA drug. Of these, 20 (47%) were not included in the papers. The statistical significance of five of the remaining 23 outcomes (22%) changed between the NDA and the paper, with four changing to favor the test drug in the paper (p = 0.38). Excluding unknowns, 99 conclusions were provided in both NDAs and papers, nine conclusions (9%) changed from the FDA review of the NDA to the paper, and all nine did so to favor the test drug (100%, 95% CI 72%–100%, p = 0.0039).

Conclusions

Many trials were still not published 5 y after FDA approval. Discrepancies between the trial information reviewed by the FDA and information found in published trials tended to lead to more favorable presentations of the NDA drugs in the publications. Thus, the information that is readily available in the scientific literature to health care professionals is incomplete and potentially biased.

12.

Background

The recommended first-line therapy for chronic urticaria is second-generation antihistamines, but the modalities of treatment remain unclear. Numerous recommendations with heterogeneous conclusions have been published. We wondered whether such heterogeneous conclusions were linked to the quality of published studies and their reporting.

Objective

To review the study design and quality of reporting of randomized control trials investigating pharmacological treatment of autoimmune or idiopathic chronic urticaria.

Methodology/Principal Findings

MEDLINE and EMBASE were searched for pharmacological randomized controlled trials involving patients with chronic autoimmune or idiopathic urticaria, with the main outcome being treatment efficacy. Data were collected on general characteristics of the studies, internal validity, studied treatments, design of the trial, outcome measures and “spin” strategy in interpreting results. Spin was defined as use of specific reporting strategies to highlight that the experimental treatment is beneficial, despite statistically nonsignificant results. We evaluated 52 articles that met our criteria. Patients were reported as blinded in 42 articles (81%) and the outcome assessor was blinded in 37 (71%). A placebo was the only comparator in 13 (25%) studies. The study duration was <8 weeks in 39 articles (75%), with no follow-up after discontinuation of treatment in 37 (71%). In 4 articles (8%), blinding was clear because they described blinding of the outcome assessor, the treatment was not recognizable (identical or double-dummy) or had no major secondary effects, and computed randomization was centralized. The primary outcome was specified in 33 articles (63%) and was a score in 31. In total, 15 different scores were used. A spin strategy was used for 10 of 12 studies with a nonsignificant primary outcome.

Conclusion

For establishing guidelines in treatment of chronic urticaria, studies should focus on choosing clinically relevant and reproducible primary outcomes, long-term follow-up, limited use of placebo and avoiding spin strategies.

13.

Background

We explore whether the number of null results in large National Heart, Lung, and Blood Institute (NHLBI)-funded trials has increased over time.

Methods

We identified all large NHLBI-supported RCTs between 1970 and 2012 evaluating drugs or dietary supplements for the treatment or prevention of cardiovascular disease. Trials were included if direct costs were >$500,000/year, participants were adult humans, and the primary outcome was cardiovascular risk, disease or death. The 55 trials meeting these criteria were coded for whether they were published prior to or after the year 2000, whether they were registered in ClinicalTrials.gov prior to publication, whether they used an active or placebo comparator, and whether or not the trial had industry co-sponsorship. We tabulated whether the study reported a positive, negative, or null result on the primary outcome variable and for total mortality.

Results

Seventeen of 30 studies (57%) published prior to 2000 showed a significant benefit of intervention on the primary outcome, compared to only 2 of the 25 trials (8%) published after 2000 (χ2 = 12.2, df = 1, p = 0.0005). There has been no change in the proportion of trials that compared treatment to placebo versus an active comparator. Industry co-sponsorship was unrelated to the probability of reporting a significant benefit. Pre-registration in ClinicalTrials.gov was strongly associated with the trend toward null findings.
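The reported test statistic can be reproduced from the counts in the abstract (17/30 significant before 2000 vs. 2/25 after). The sketch below applies Pearson's chi-square test with Yates continuity correction, the conventional choice for a 2×2 table and the one consistent with the reported value, in plain Python:

```python
import math

# 2x2 table from the abstract: significant benefit on the primary outcome
# (yes/no) by publication era (before 2000 / after 2000)
table = [[17, 13], [2, 23]]

row = [sum(r) for r in table]
col = [sum(c) for c in zip(*table)]
n = sum(row)

# Pearson chi-square with Yates continuity correction
chi2 = 0.0
for i in range(2):
    for j in range(2):
        expected = row[i] * col[j] / n
        chi2 += (abs(table[i][j] - expected) - 0.5) ** 2 / expected

# For 1 degree of freedom the p-value has a closed form via erfc
p = math.erfc(math.sqrt(chi2 / 2))
print(f"chi2 = {chi2:.1f}, p = {p:.4f}")  # chi2 = 12.2, p = 0.0005
```

This matches the abstract's χ2 = 12.2, df = 1, p = 0.0005.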

Conclusions

The number of NHLBI trials reporting positive results declined after the year 2000. Prospective declaration of outcomes in RCTs, and the adoption of transparent reporting standards, as required by ClinicalTrials.gov, may have contributed to the trend toward null findings.

14.

Background

ClinicalTrials.gov is a publicly accessible, Internet-based registry of clinical trials managed by the US National Library of Medicine that has the potential to address selective trial publication. Our objectives were to examine completeness of registration within ClinicalTrials.gov and to determine the extent and correlates of selective publication.

Methods and Findings

We examined reporting of registration information among a cross-section of trials that had been registered at ClinicalTrials.gov after December 31, 1999 and updated as having been completed by June 8, 2007, excluding phase I trials. We then determined publication status among a random 10% subsample by searching MEDLINE using a systematic protocol, after excluding trials completed after December 31, 2005 to allow at least 2 y for publication following completion. Among the full sample of completed trials (n = 7,515), nearly 100% reported all data elements mandated by ClinicalTrials.gov, such as intervention and sponsorship. Optional data element reporting varied, with 53% reporting trial end date, 66% reporting primary outcome, and 87% reporting trial start date. Among the 10% subsample, less than half (311 of 677, 46%) of trials were published, among which 96 (31%) provided a citation within ClinicalTrials.gov of a publication describing trial results. Trials primarily sponsored by industry (40%, 144 of 357) were less likely to be published when compared with nonindustry/nongovernment sponsored trials (56%, 110 of 198; p<0.001), but there was no significant difference when compared with government sponsored trials (47%, 57 of 122; p = 0.22). Among trials that reported an end date, 75 of 123 (61%) completed prior to 2004, 50 of 96 (52%) completed during 2004, and 62 of 149 (42%) completed during 2005 were published (p = 0.006).

Conclusions

Reporting of optional data elements varied and publication rates among completed trials registered within ClinicalTrials.gov were low. Without greater attention to reporting of all data elements, the potential for ClinicalTrials.gov to address selective publication of clinical trials will be limited. Please see later in the article for the Editors' Summary.

15.

Background

After the publication of the CONSORT 2010 statement, few studies have been conducted to assess the reporting quality of randomized clinical trials (RCTs) on treatment of diabetes mellitus with Traditional Chinese Medicine (TCM) published in Chinese journals.

Objective

To investigate the current situation of the reporting quality of RCTs in leading medical journals in China with the CONSORT 2010 statement as criteria.

Methods

The China National Knowledge Infrastructure (CNKI) electronic database was searched for RCTs on the treatment of diabetes mellitus with TCM published in the Journal of Traditional Chinese Medicine, Chinese Journal of Integrated Traditional & Western Medicine, and the China Journal of Chinese Materia Medica from January to December 2011. We excluded trials reported as “animal studies”, “in vitro studies”, “case studies”, or “systematic reviews”. The CONSORT checklist was applied by two independent raters to evaluate the reporting quality of all eligible trials after discussing and comprehending the items thoroughly. Each item in the checklist was graded as either “yes” or “no” depending on whether it had been reported by the authors.

Results

We identified 27 RCTs. According to the 37 items in the CONSORT checklist, the average reporting percentage was 45.0%, in which the average reporting percentage for the “title and abstract”, the “introduction”, the “methods”, the “results”, the “discussion” and the “other information” was 33.3%, 88.9%, 36.4%, 54.4%, 71.6% and 14.8%, respectively. In the Journal of Traditional Chinese Medicine, Chinese Journal of Integrated Traditional & Western Medicine, and the China Journal of Chinese Materia Medica the average reporting percentage was 42.2%, 56.8%, and 46.0%, respectively.

Conclusions

The reporting quality of RCTs in these three journals was insufficient to allow readers to assess the validity of the trials. We recommend that editors require authors to use the CONSORT statement when reporting their trial results as a condition of publication.

16.

Background

The synthesis of published research in systematic reviews is essential when providing evidence to inform clinical and health policy decision-making. However, the validity of systematic reviews is threatened if journal publications represent a biased selection of all studies that have been conducted (dissemination bias). To investigate the extent of dissemination bias we conducted a systematic review that determined the proportion of studies published as peer-reviewed journal articles and investigated factors associated with full publication in cohorts of studies (i) approved by research ethics committees (RECs) or (ii) included in trial registries.

Methods and Findings

Four bibliographic databases were searched for methodological research projects (MRPs) without limitations for publication year, language or study location. The searches were supplemented by handsearching the references of included MRPs. We estimated the proportion of studies published using prediction intervals (PI) and a random effects meta-analysis. Pooled odds ratios (OR) were used to express associations between study characteristics and journal publication. Seventeen MRPs (23 publications) evaluated cohorts of studies approved by RECs; the proportion of published studies had a PI between 22% and 72% and the weighted pooled proportion when combining estimates would be 46.2% (95% CI 40.2%–52.4%, I2 = 94.4%). Twenty-two MRPs (22 publications) evaluated cohorts of studies included in trial registries; the PI of the proportion published ranged from 13% to 90% and the weighted pooled proportion would be 54.2% (95% CI 42.0%–65.9%, I2 = 98.9%). REC-approved studies with statistically significant results (compared with those without statistically significant results) were more likely to be published (pooled OR 2.8; 95% CI 2.2–3.5). Phase-III trials were also more likely to be published than phase II trials (pooled OR 2.0; 95% CI 1.6–2.5). The probability of publication within two years after study completion ranged from 7% to 30%.
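The pooled odds ratios reported here come from random-effects meta-analysis. A minimal sketch of the DerSimonian-Laird method, using made-up study-level log odds ratios and variances rather than data from the methodological research projects, looks like this:

```python
import math

# Illustrative study-level log odds ratios and their variances
# (invented numbers, not data from the included MRPs)
log_or = [math.log(1.5), math.log(5.0), math.log(2.0), math.log(6.0)]
var = [0.10, 0.15, 0.08, 0.20]

# Fixed-effect (inverse-variance) quantities
w = [1 / v for v in var]
fixed = sum(wi * y for wi, y in zip(w, log_or)) / sum(w)

# Cochran's Q and the DerSimonian-Laird between-study variance tau^2
q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, log_or))
df = len(log_or) - 1
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c)

# Random-effects weights, pooled estimate, and 95% CI on the OR scale
w_re = [1 / (v + tau2) for v in var]
pooled = sum(wi * y for wi, y in zip(w_re, log_or)) / sum(w_re)
se = math.sqrt(1 / sum(w_re))
ci = (math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se))

print(f"pooled OR = {math.exp(pooled):.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}")
```

When between-study heterogeneity is large, as the I2 values above indicate, tau2 dominates the weights and widens the pooled confidence interval, which is why the authors warn that predictions for a future study are very uncertain.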

Conclusions

A substantial part of the studies approved by RECs or included in trial registries remains unpublished. Due to the large heterogeneity, a prediction of the publication probability for a future study is very uncertain. Non-publication of research is not a random process, e.g., it is associated with the direction of study findings. Our findings suggest that the dissemination of research findings is biased.

17.

Objective

There is a variable body of evidence on adverse bone outcomes in HIV patients co-infected with hepatitis C virus (HCV). We examined the association of HIV/HCV co-infection on osteoporosis or osteopenia (reduced bone mineral density; BMD) and fracture.

Design

Systematic review and random effects meta-analyses.

Methods

A systematic literature search was conducted for articles published in English up to 1 April 2013. All studies reporting either BMD (g/cm2, or as a T-score) or incident fractures in HIV/HCV co-infected patients compared to either HIV mono-infected or HIV/HCV uninfected/seronegative controls were included. Random effects meta-analyses estimated the pooled odds ratio (OR) and the relative risk (RR) and associated 95% confidence intervals (CI).

Results

Thirteen eligible publications (BMD: n = 6; fracture: n = 7) of the 2,064 identified were included, with a total of 427,352 subjects. No publications reported data on HCV mono-infected controls. Meta-analysis of cross-sectional studies confirmed that low bone mineral density was more prevalent among co-infected patients than among HIV mono-infected controls (pooled OR 1.98, 95% CI 1.18, 3.31) but not compared to uninfected controls (pooled OR 1.47, 95% CI 0.78, 2.78). A significant association between co-infection and fracture was found compared to HIV mono-infected patients in cohort and case-control studies (pooled RR 1.57, 95% CI 1.33, 1.86) and compared to HIV/HCV-uninfected controls in cohort (pooled RR 2.46, 95% CI 1.03, 3.88) and cross-sectional studies (pooled OR 2.30, 95% CI 2.09, 2.23).

Conclusions

The associations of co-infection with prevalent low BMD and risk of fracture are confirmed in this meta-analysis. Although the mechanisms of HIV/HCV co-infection’s effect on BMD and fracture are not well understood, there is evidence to suggest that adverse outcomes among HIV/HCV co-infected patients are substantial.

18.

Background

The US Food and Drug Administration Amendments Act requires results from clinical trials of Food and Drug Administration–approved drugs to be posted at ClinicalTrials.gov within 1 y after trial completion. We compared the timing and completeness of results of drug trials posted at ClinicalTrials.gov and published in journals.

Methods and Findings

We searched ClinicalTrials.gov on March 27, 2012, for randomized controlled trials of drugs with posted results. For a random sample of these trials, we searched PubMed for corresponding publications. Data were extracted independently from ClinicalTrials.gov and from the published articles for trials with results both posted and published. We assessed the time to first public posting or publishing of results and compared the completeness of results posted at ClinicalTrials.gov versus published in journal articles. Completeness was defined as the reporting of all key elements, according to three experts, for the flow of participants, efficacy results, adverse events, and serious adverse events (e.g., for adverse events, reporting of the number of adverse events per arm, without restriction to statistically significant differences between arms for all randomized patients or for those who received at least one treatment dose). Of the 600 trials with results posted at ClinicalTrials.gov, 50% (n = 297) had no corresponding published article. For trials with both posted and published results (n = 202), the median time between primary completion date and first results publicly posted was 19 mo (first quartile = 14, third quartile = 30 mo), and the median time between primary completion date and journal publication was 21 mo (first quartile = 14, third quartile = 28 mo). Reporting was significantly more complete at ClinicalTrials.gov than in the published article for the flow of participants (64% versus 48% of trials, p<0.001), efficacy results (79% versus 69%, p = 0.02), adverse events (73% versus 45%, p<0.001), and serious adverse events (99% versus 63%, p<0.001). The main study limitation was that we considered only the publication describing the results for the primary outcomes.

Conclusions

Our results highlight the need to search ClinicalTrials.gov for both unpublished and published trials. Trial results, especially serious adverse events, are reported more completely at ClinicalTrials.gov than in the published article.

19.

Importance

Poor mental health places a burden on individuals and populations. Resilient persons are able to adapt to life’s challenges and maintain high quality of life and function. Finding effective strategies to bolster resilience in individuals and populations is of interest to many stakeholders.

Objectives

To synthesize the evidence for resiliency training programs in improving mental health and capacity in 1) diverse adult populations and 2) persons with chronic diseases.

Data Sources

Electronic databases, clinical trial registries, and bibliographies. We also contacted study authors and field experts.

Study Selection

Randomized trials assessing the efficacy of any program intended to enhance resilience in adults and published after 1990. No restrictions were made based on outcome measured or comparator used.

Data Extraction and Synthesis

Reviewers worked independently and in duplicate to extract study characteristics and data. These were confirmed with authors. We conducted a random effects meta-analysis on available data and tested for interaction in planned subgroups.

Main Outcomes

The standardized mean difference (SMD) effect of resiliency training programs on 1) resilience/hardiness, 2) quality of life/well-being, 3) self-efficacy/activation, 4) depression, 5) stress, and 6) anxiety.

Results

We found 25 small trials at moderate to high risk of bias. Interventions varied in format and theoretical approach. Random effects meta-analysis showed a moderate effect of generalized stress-directed programs on enhancing resilience [pooled SMD 0.37 (95% CI 0.18, 0.57), p = .0002; I² = 41%] within 3 months of follow-up. Improvement in other outcomes favored the interventions and reached statistical significance after removal of two studies at high risk of bias. Trauma-induced stress-directed programs significantly reduced stress [−0.53 (−1.04, −0.03), p = .03; I² = 73%] and depression [−0.51 (−0.92, −0.10), p = .04; I² = 61%].
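The pooled SMDs and I² values above come from a random-effects model. A minimal sketch of the standard DerSimonian–Laird approach is shown below; the function name and the choice of this particular estimator are illustrative assumptions, since the abstract does not state which random-effects estimator was used.

```python
import math

def random_effects_pool(effects, variances):
    """DerSimonian-Laird random-effects pooling of per-study effect sizes
    (e.g., standardized mean differences) with their sampling variances."""
    # Fixed-effect (inverse-variance) weights and pooled estimate
    w = [1.0 / v for v in variances]
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    # Cochran's Q and the between-study variance tau^2
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)
    # Random-effects weights add tau^2 to each study's variance
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    ci = (pooled - 1.96 * se, pooled + 1.96 * se)
    # I^2: percentage of variability attributable to heterogeneity
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, ci, i2
```

Note that when heterogeneity is low (Q ≤ df), tau² is truncated at zero and the random-effects estimate coincides with the fixed-effect one.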

Conclusions

We found evidence warranting low confidence that resiliency training programs have a small to moderate effect at improving resilience and other mental health outcomes. Further study is needed to better define the resilience construct and to design interventions specific to it.

Registration Number

PROSPERO #CRD42014007185

20.

Background:

Many clinical trials examine a composite outcome of admission to hospital and death, or infer a relationship between hospital admission and survival benefit. This assumes concordance of the outcomes “hospital admission” and “death.” However, whether the effects of a treatment on hospital admissions and readmissions correlate to its effect on serious outcomes such as death is unknown. We aimed to assess the correlation and concordance of effects of medical interventions on admission rates and mortality.

Methods:

We searched the Cochrane Database of Systematic Reviews from its inception to January 2012 (issue 1, 2012) for systematic reviews of treatment comparisons that included meta-analyses for both admission and mortality outcomes. For each meta-analysis, we synthesized treatment effects on admissions and death, from respective randomized trials reporting those outcomes, using random-effects models. We then measured the concordance of directions of effect sizes and the correlation of summary estimates for the 2 outcomes.

Results:

We identified 61 meta-analyses including 398 trials reporting mortality and 182 trials reporting admission rates; 125 trials reported both outcomes. In 27.9% of comparisons, the point estimates of treatment effects for the 2 outcomes were in opposite directions; in 8.2% of trials, the 95% confidence intervals did not overlap. We found no significant correlation between effect sizes for admission and death (Pearson r = 0.07, p = 0.6). Our results were similar when we limited our analysis to trials reporting both outcomes.
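The direction-concordance and correlation checks described in the Methods can be sketched as follows. The function names and the use of log-scale effect estimates (so that sign indicates direction of effect) are illustrative assumptions, not details from the paper.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation between two lists of summary effect sizes."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def discordance_rate(admission_log_effects, mortality_log_effects):
    """Fraction of treatment comparisons whose two log effect estimates
    point in opposite directions (one favours treatment, the other control)."""
    pairs = list(zip(admission_log_effects, mortality_log_effects))
    opposite = sum(1 for a, m in pairs if a * m < 0)
    return opposite / len(pairs)
```

On the log scale, a ratio effect below 1 becomes negative, so a simple sign comparison (`a * m < 0`) detects comparisons where admission and mortality effects disagree in direction.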

Interpretation:

In this metaepidemiological study, admission and mortality outcomes did not correlate, and discordances occurred in about one-third of the treatment comparisons included in our analyses. Both outcomes convey useful information and should be reported separately, but extrapolating the benefits of admission to survival is unreliable and should be avoided.

Health care decisions often rely on effects of interventions described using rates of admission or readmission to hospital.1,2 These outcomes are typically regarded as indicators of insufficient quality of care and inefficient spending of health care resources;1,2 however, whether they can predict other serious clinical outcomes, such as death, is unknown.

Although effects on admission or readmission rates are often analyzed using large sets of routinely collected data, such as from administrative databases and electronic health records, many randomized controlled trials (RCTs) also collect data on admission rates, and some RCTs collect mortality data. Moreover, some trials combine death and admission to hospital as the primary composite outcome3 to increase the study's power to detect significant differences and reduce the required study size.4 However, the interpretation of such a combination is difficult when the treatment effects on the 2 components are not concordant,5 for example, when more patients survive but rates of admission increase. In such cases, composite outcomes may dilute or obscure clinically significant treatment effects on important individual components,4,6 and incomplete disclosure of individual effects may mislead the interpretation of the results.4

We investigated systematic reviews of treatment comparisons that included meta-analyses of RCTs assessing effects on both rates of admission and mortality. We used the reported trial data to assess whether effects on admission rates were concordant with effects on mortality, or whether it was possible to identify interventions and diseases in which these 2 outcomes would provide differing pictures of the merits of the tested interventions.
