Similar Articles

Found 20 similar articles.
1.

Purpose

To reduce publication bias, systematic reviewers are advised to search conference abstracts to identify randomized controlled trials (RCTs) conducted in humans and not published in full. We assessed the information provided by authors to aid identification of RCTs for reviews.

Methods

We handsearched the Association for Research in Vision and Ophthalmology (ARVO) meeting abstracts for 2004 to 2009 to identify reports of RCTs. We compared our classification with that of authors (requested by ARVO 2004–2006), and authors’ report of trial registration (required by ARVO 2007–2009).

Results

Authors identified their study as a clinical trial for 169/191 (88%; 95% CI, 84–93) RCTs we identified for 2004, 174/212 (82%; 95% CI, 77–87) for 2005 and 162/215 (75%; 95% CI, 70–81) for 2006. Authors provided registration information for 107/172 (62%; 95% CI, 55–69) RCTs for 2007, 103/153 (67%; 95% CI, 60–75) for 2008, and 126/171 (74%; 95% CI, 67–80) for 2009. Most RCT authors providing a trial register name specified ClinicalTrials.gov (276/312; 88%; 95% CI, 85–92) and provided a valid ClinicalTrials.gov registration number (261/276; 95%; 95% CI, 92–97). Based on information provided by authors, trial registration information would be accessible for 48% (83/172; 95% CI, 41–56) of all ARVO abstracts describing RCTs in 2007, 63% (96/153; 95% CI, 55–70) in 2008, and 70% (118/171; 95% CI, 62–76) in 2009.

Conclusions

Authors of abstracts describing RCTs frequently did not classify them as clinical trials, nor did they report trial registration information as required by the conference organizers. Systematic reviewers cannot rely on authors to identify relevant unpublished trials or to report trial registration where it exists.
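The percentages and intervals reported above are consistent with a simple normal-approximation (Wald) confidence interval for a proportion; a minimal sketch (the authors' exact interval method is not stated, and `prop_ci` is an illustrative name):

```python
import math

def prop_ci(successes: int, n: int, z: float = 1.96):
    """Wald (normal-approximation) confidence interval for a proportion."""
    p = successes / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - half_width), min(1.0, p + half_width)

# 169 of 191 RCTs self-classified as clinical trials in 2004:
p, lo, hi = prop_ci(169, 191)
print(f"{p:.0%} (95% CI, {lo:.0%}-{hi:.0%})")  # matches the reported 88% (84-93)
```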

2.
Many randomized controlled trials (RCTs) are biased and difficult to reproduce due to methodological flaws and poor reporting. There is increasing attention to responsible research practices and implementation of reporting guidelines, but whether these efforts have improved the methodological quality of RCTs (e.g., lowered the risk of bias) is unknown. We therefore mapped risk-of-bias trends over time in RCT publications in relation to journal and author characteristics. Meta-information on 176,620 RCTs published between 1966 and 2018 was extracted. The risk-of-bias probability (random sequence generation, allocation concealment, blinding of patients/personnel, and blinding of outcome assessment) was assessed using a risk-of-bias machine learning tool. This tool was validated against 63,327 human risk-of-bias assessments obtained from 17,394 RCTs evaluated in the Cochrane Database of Systematic Reviews (CDSR). Moreover, RCT registration and CONSORT Statement reporting were assessed using automated searches. Publication characteristics included the number of authors, journal impact factor (JIF), and medical discipline. The annual number of published RCTs increased substantially over four decades, accompanied by increases in the number of authors (5.2 to 7.8) and institutions (2.9 to 4.8) per trial. The risk of bias remained present in most RCTs but decreased over time for allocation concealment (63% to 51%), random sequence generation (57% to 36%), and blinding of outcome assessment (58% to 52%). Trial registration (37% to 47%) and use of the CONSORT Statement (1% to 20%) also increased rapidly. In journals with a higher impact factor (>10), the risk of bias was consistently lower, with higher levels of RCT registration and use of the CONSORT Statement.
Automated risk-of-bias predictions had accuracies above 70% for allocation concealment (70.7%), random sequence generation (72.1%), and blinding of patients/personnel (79.8%), but not for blinding of outcome assessment (62.7%). In conclusion, the likelihood of bias in RCTs has generally decreased over the last decades. This optimistic trend may be driven by increased knowledge augmented by mandatory trial registration and more stringent reporting guidelines and journal requirements. Nevertheless, relatively high probabilities of bias remain, particularly in journals with lower impact factors. This emphasizes that further improvement of RCT registration, conduct, and reporting is still urgently needed.

Many randomized controlled trials (RCTs) are biased and difficult to reproduce due to methodological flaws and poor reporting. Analysis of 176,620 RCTs published between 1966 and 2018 reveals that the risk of bias in RCTs generally decreased. Nevertheless, relatively high probabilities of bias remain, showing that further improvement of RCT registration, conduct, and reporting is still urgently needed.

3.
4.
Neonatal seizures are common in the neonatal intensive care unit. Clinicians treat them with several anti-epileptic drugs (AEDs), but current AEDs exhibit sub-optimal efficacy, and several randomized controlled trials (RCTs) of novel AEDs are planned. The aim of this study was to measure the influence of trial design on the required sample size of an RCT. We used seizure time courses from 41 term neonates with hypoxic ischaemic encephalopathy to build seizure treatment trial simulations. We used five outcome measures, three AED protocols, eight treatment delays from seizure onset (Td), and four levels of trial AED efficacy to simulate different RCTs. We performed power calculations for each RCT design and analysed the resultant sample size. We also assessed the rate of false positives, or placebo effect, in typical uncontrolled studies. We found that the false positive rate ranged from 5 to 85% of patients depending on trial design. For controlled trials, the choice of outcome measure had the largest effect on sample size, with median differences of 30.7-fold (IQR: 13.7–40.0) across a range of AED protocols, Td, and trial AED efficacy (p<0.001). RCTs that compared the trial AED with positive controls required a median 3.2-fold increase in sample size (IQR: 1.9–11.9; p<0.001). Delays in AED administration from seizure onset also increased the required sample size 2.1-fold (IQR: 1.7–2.9; p<0.001). Subgroup analysis showed that RCTs in neonates treated with hypothermia required a median 2.6-fold increase in sample size (IQR: 2.4–3.0) compared to trials in normothermic neonates (p<0.001). These results show that RCT design has a profound influence on the required sample size. Trials that use a control group and an appropriate outcome measure, and that control for differences in Td between groups in the analysis, will be valid and will minimise sample size.
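The power calculations above come from simulations, but the standard closed-form sample-size formula for comparing two proportions illustrates how design choices drive the required n. A sketch with hypothetical response rates (the rates and the `n_per_group` name are illustrative, not from the paper):

```python
import math

def n_per_group(p1: float, p2: float) -> int:
    """Per-group sample size for a two-sided two-proportion z-test
    (normal approximation, alpha = 0.05, power = 0.80)."""
    z_alpha = 1.959964  # z quantile for two-sided alpha = 0.05
    z_beta = 0.841621   # z quantile for power = 0.80
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)

# Hypothetical: 50% response on trial AED vs. 30% on control
print(n_per_group(0.5, 0.3))  # 93 per group
```

Shrinking the treatment effect (e.g., via delayed administration) inflates n sharply, consistent with the paper's finding that outcome measure and Td dominate sample size.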

5.

Introduction

Although selective reporting of clinical trial results introduces bias into evidence-based clinical decision making, the extent of publication bias in pediatric epilepsy is currently unknown. Because considerable ambiguity exists in the treatment of pediatric seizures, an important and common clinical problem, we assessed the public availability of results of phase 3 clinical trials that evaluated treatments of seizures in children and adolescents as a surrogate for the extent of publication bias in pediatric epilepsy.

Methods

We determined the proportion of published and unpublished study results of phase 3 clinical trials that were registered as completed on ClinicalTrials.gov. We searched ClinicalTrials.gov, PubMed, and Google Scholar for publications and contacted principal investigators or sponsors. The analysis was performed according to STROBE criteria.

Results

Of the studies completed before 2014 (N = 99), 75 (76%) pediatric phase 3 clinical trials were published but 24 (24%) remained unpublished. The unpublished studies withheld evidence from 4,437 patients. Mean time-to-publication was 25 (SD ± 15.6) months, more than twice as long as mandated.

Conclusion

Ten years after the ICMJE’s clinical trials registration initiative, there is still a considerable amount of selective reporting and publication delay that potentially distorts the body of evidence in the treatment of pediatric seizures.

6.

Context

Since September 2005, the International Committee of Medical Journal Editors (ICMJE) has required that randomised controlled trials (RCTs) be prospectively registered in a publicly accessible database. After registration, a trial registration number (TRN) is assigned to each RCT, which should make it easier to identify future publications and to cross-check published results against the associated registry entries, as long as the unique identification number is reported in the article.

Objective

Our primary objective was to evaluate the reporting of trial registration numbers in biomedical publications. Secondary objectives were to evaluate how many published RCTs had been registered and how many registered RCTs had resulted in a publication, using a sample of trials from the Netherlands Trials Register (NTR).

Design, Setting

Two different samples of RCTs were examined: 1) RCTs published in November 2010 in core clinical journals identified in MEDLINE; 2) RCTs registered in the NTR with a latest expected end date of 31 August 2008.

Results

Fifty-five percent (166/302) of the reports of RCTs found in MEDLINE and 60% (186/312) of the published reports of RCTs from the NTR cohort contained a TRN. In both samples, reporting of a TRN was more likely in RCTs published in ICMJE member journals as compared to non-ICMJE member journals (MEDLINE 58% vs. 45%; NTR: 70% vs. 49%). Thirty-nine percent of published RCTs in the MEDLINE sample appear not to have been registered, and 48% of RCTs registered in the NTR seemed not to have been published at least two years after the expected date for study completion.

Conclusion

Our results show that further promotion and implementation of trial registration, and accurate reporting of TRNs, are still needed. This might be helped by inclusion of the TRN as an item on the CONSORT checklist.

7.
Background

A large proportion of mindfulness-based therapy trials report statistically significant results, even in the context of very low statistical power. The objective of the present study was to characterize the reporting of “positive” results in randomized controlled trials of mindfulness-based therapy. We also assessed mindfulness-based therapy trial registrations for indications of possible reporting bias and reviewed recent systematic reviews and meta-analyses to determine whether reporting biases were identified.

Methods

CINAHL, Cochrane CENTRAL, EMBASE, ISI, MEDLINE, PsycInfo, and SCOPUS databases were searched for randomized controlled trials of mindfulness-based therapy. The number of positive trials was described and compared to the number that might be expected if mindfulness-based therapy were as effective as individual therapy for depression. Trial registries were searched for mindfulness-based therapy registrations. The same databases were also searched for systematic reviews and meta-analyses of mindfulness-based therapy.

Results

Of 124 published trials, 108 (87%) reported ≥1 positive outcome in the abstract and 109 (88%) concluded that mindfulness-based therapy was effective, 1.6 times the expected number of positive trials based on an effect size of d = 0.55 (expected number of positive trials = 65.7). Of 21 trial registrations, 13 (62%) remained unpublished 30 months post-trial completion. No trial registration adequately specified a single primary outcome measure with its time of assessment. None of 36 systematic reviews and meta-analyses concluded that effect estimates were overestimated due to reporting biases.

Conclusions

The proportion of mindfulness-based therapy trials with statistically significant results may overstate what would occur in practice.
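The "expected number of positive trials" benchmark can be reproduced in outline: compute each trial's power to detect d = 0.55 and sum across trials. A sketch using a normal-approximation power formula and a single hypothetical group size (the study used each trial's actual sample sizes):

```python
import math

def power_two_sample(d: float, n_per_group: int) -> float:
    """Approximate power of a two-sided two-sample test at effect size d
    (normal approximation to the noncentral t, alpha = 0.05)."""
    z_alpha = 1.959964
    ncp = d * math.sqrt(n_per_group / 2)  # noncentrality of the z statistic
    cdf = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))
    return cdf(ncp - z_alpha) + cdf(-ncp - z_alpha)

# Hypothetical: 124 trials, each with 30 participants per group
expected_positive = 124 * power_two_sample(0.55, 30)
print(round(expected_positive, 1))
```

With realistic per-trial sizes the sum lands near the paper's 65.7; observing 108–109 positive trials against such an expectation is the reporting-bias signal the authors describe.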

8.
9.
Background

Management guidelines of chronic obstructive pulmonary disease (COPD) are mainly based on results of randomised controlled trials (RCTs), but some authors have suggested limited representativeness of patients included in these trials. No previous studies have applied the full range of selection criteria to a broad COPD patient population in a real-life setting.

Methods

We identified all RCTs of inhaled long-acting bronchodilator therapy, during 1999–2013, at ClinicalTrials.gov and translated trial selection criteria into definitions compatible with electronic medical records. Eligibility was calculated for each RCT by applying these criteria to a uniquely representative, well-characterised population of patients with COPD from the Optimum Patient Care Research Database (OPCRD).

Results

Median eligibility of 36,893 patients with COPD for participation in 31 RCTs was 23% (interquartile range 12–38). Two studies of olodaterol showed the highest eligibility, at 55% and 58%. Conversely, the lowest eligibility was observed in two studies that required a history of exacerbations in the past year (3.5% and 3.9%). For the patient subgroup with a modified Medical Research Council score ≥2, the overall median eligibility was 27%.

Conclusions

By applying an extensive range of RCT selection criteria to a large, representative COPD patient population, this study highlights that the interpretation of RCT results must take into account that RCT participants are variably representative of patients in the community, though generally more representative than previously believed.

10.

Background

Clear, transparent, and sufficiently detailed abstracts of randomized controlled trials (RCTs) published in journal articles are important because readers will often base their initial assessment of a trial on such information. However, little is known about the quality of reporting in abstracts of RCTs published in medical journals in China.

Methods

We identified RCT abstracts from five leading Chinese medical journals published between 1998 and 2007 and indexed in MEDLINE. We assessed the quality of reporting of these abstracts based on the Consolidated Standards of Reporting Trials (CONSORT) abstract checklist. We also sought to identify whether any differences exist in reporting between the Chinese- and English-language versions of the same abstract.

Results

We identified 332 RCT abstracts eligible for examination. Overall, the abstracts we examined reported 0–8 of the items designated in the CONSORT checklist, with an average of three items reported per abstract. Details of the interventions (288/332; 87%), the number of participants randomized (216/332; 65%), and the study objectives (109/332; 33%) were the three most commonly reported items. Only two RCT abstracts reported details of trial registration, no abstract reported the method of allocation concealment, and only one specified who was blinded. In terms of the proportion of RCT abstracts fulfilling a given criterion, the absolute difference between the Chinese and English abstracts averaged 10 percentage points per item (range, 0 to 25).

Conclusions

The quality of reporting in abstracts of RCTs published in Chinese medical journals needs to be improved. We hope that the introduction and endorsement of the CONSORT for Abstracts guidelines by journals reporting RCTs will lead to improvements in the quality of reporting.

11.
Background

Valid assessment of drug efficacy and safety requires an evidence base free of reporting bias. Using trial reports in Food and Drug Administration (FDA) drug approval packages as a gold standard, we previously found that the published literature inflated the apparent efficacy of antidepressant drugs. The objective of the current study was to determine whether this has improved with recently approved drugs.

Methods and Findings

Using medical and statistical reviews in FDA drug approval packages, we identified 30 Phase II/III double-blind placebo-controlled acute monotherapy trials, involving 13,747 patients, of desvenlafaxine, vilazodone, levomilnacipran, and vortioxetine; we then identified the corresponding published reports. We compared the data from this newer cohort of antidepressants (approved February 2008 to September 2013) with the previously published dataset on 74 trials of 12 older antidepressants (approved December 1987 to August 2002). Using logistic regression, we examined the effects of trial outcome and trial cohort (newer versus older) on transparent reporting (whether published and FDA conclusions agreed). Among newer antidepressants, transparent publication occurred more often with positive (15/15 = 100%) than negative (7/15 = 47%) trials (OR 35.1; 95% CI, 1.8 to 693). Controlling for trial outcome, transparent publication occurred more often with newer than older trials (OR 6.6; 95% CI, 1.6 to 26.4). Within negative trials, transparent reporting increased from 11% to 47%. We also conducted and contrasted FDA- and journal-based meta-analyses. For newer antidepressants, the FDA-based effect size (ES_FDA) was 0.24 (95% CI, 0.18 to 0.30), while the journal-based effect size (ES_Journals) was 0.29 (95% CI, 0.23 to 0.36). Thus, effect size inflation, presumably due to reporting bias, was 0.05, less than for older antidepressants (0.10). Limitations of this study include a small number of trials and drugs (all belonging to a single class) and a focus on efficacy rather than safety.

Conclusions

Reporting bias persists but appears to have diminished for newer, compared to older, antidepressants. Continued efforts are needed to further improve transparency in the scientific literature.
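The OR of 35.1 for the 15/15 versus 7/15 comparison involves a zero cell, which is conventionally handled by adding 0.5 to every cell (the Haldane-Anscombe correction); a sketch, assuming that convention was used:

```python
def odds_ratio(a: float, b: float, c: float, d: float) -> float:
    """Odds ratio for a 2x2 table [[a, b], [c, d]], applying a 0.5
    Haldane-Anscombe correction when any cell is zero."""
    if 0 in (a, b, c, d):
        a, b, c, d = a + 0.5, b + 0.5, c + 0.5, d + 0.5
    return (a * d) / (b * c)

# Positive trials: 15 transparently published, 0 not;
# negative trials: 7 transparently published, 8 not.
print(round(odds_ratio(15, 0, 7, 8), 1))  # 35.1, matching the abstract
```

The correction is also what produces the extremely wide confidence interval (1.8 to 693) reported for this estimate.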

12.
13.
14.
15.

Objective

In an effort to understand how results of human clinical trials are made public, we analyze a large set of clinical trials registered at ClinicalTrials.gov, the world’s largest clinical trial registry.

Materials and Methods

We considered two trial result artifacts: (1) existence of a trial result journal article that is formally linked to a registered trial or (2) the deposition of a trial’s basic summary results within the registry.

Results

The study sample consisted of 8907 interventional, phase 2-or-higher clinical trials completed in 2006–2009. The majority of trials (72.2%) had no structured trial-article link present. A total of 2367 trials (26.6%) deposited basic summary results within the registry; of those, 969 trials (10.9%) were classified as trials with extended results and 1398 trials (15.7%) as trials with only the required basic results. The majority of trials (54.8%) had no evidence of results, based on either linked result articles or basic summary results (“silent trials”), while a minimal number (9.2%) reported results through both registry deposition and publication.

Discussion

Our study analyzes the body of linked knowledge around clinical trials (which we refer to as the “trialome”). Our results show that most trials do not report results and, for those that do, there is minimal overlap in the types of reporting. We identify several mechanisms by which the linkages between trials and their published results can be increased.

Conclusion

Our study shows that even when combining publications and registry results, and despite the availability of several information channels, trial sponsors do not sufficiently meet the mandate to inform the public, either via a linked results publication or via basic results submission.

16.

Introduction

Few studies have assessed the nature and quality of randomized controlled trials (RCTs) in Latin America and the Caribbean (LAC).

Methods and Findings

The aim of this systematic review was to evaluate the characteristics (including a risk of bias assessment) of RCTs conducted in LAC according to funding source. A review of RCTs published in 2010 in which the authors' affiliation was in LAC was performed in PubMed and LILACS. Two reviewers independently extracted data and assessed the risk of bias. The primary outcomes were the risk of bias assessment and funding source. A total of 1,695 references were found in the PubMed and LILACS databases, of which 526 were RCTs (N = 73,513 participants). English was the dominant publication language (93%) and most of the RCTs were published in non-LAC journals (84.2%). Only five of the 19 identified countries accounted for nearly 95% of all RCTs conducted in the region (Brazil 70.9%, Mexico 10.1%, Argentina 5.9%, Colombia 3.8%, and Chile 3.4%). Few RCTs covered priority areas related to the Millennium Development Goals, such as maternal health (6.7%) or high-priority infectious diseases (3.8%). Among trials in children, 3.6% evaluated nutrition interventions and 0.4% evaluated diarrhea interventions, but none evaluated pneumonia interventions. By comparison, aesthetic and sport-related interventions accounted for 4.6% of all trials. A random sample of RCTs (n = 358) was assessed for funding source: exclusively public (33.8%); private (e.g., pharmaceutical company) (15.3%); other (e.g., mixed, NGO) (15.1%); no funding (35.8%). Overall risk of bias assessments showed no statistically significant differences by type of funding source. Statistically significant differences favoring private and other types of funding were found when assessing trial registration and conflict of interest reporting.

Conclusion

Findings of this study could be used to provide more direction for future research to facilitate innovation, improve health outcomes or address priority health problems.

17.
18.

Background

Confidence that randomized controlled trial (RCT) results accurately reflect intervention effectiveness depends on proper trial conduct and the accuracy and completeness of published trial reports. The Journal of Consulting and Clinical Psychology (JCCP) is the primary trials journal amongst American Psychological Association (APA) journals. The objectives of this study were to review RCTs recently published in JCCP to evaluate (1) adequacy of primary outcome analysis definitions; (2) registration status; and, (3) among registered trials, adequacy of outcome registrations. Additionally, we compared results from JCCP to findings from a recent study of top psychosomatic and behavioral medicine journals.

Methods

Eligible RCTs were published in JCCP in 2013–2014. For each RCT, two investigators independently extracted data on (1) adequacy of outcome analysis definitions in the published report, (2) whether the RCT was registered prior to enrolling patients, and (3) adequacy of outcome registration.

Results

Of 70 RCTs reviewed, 12 (17.1%) adequately defined primary or secondary outcome analyses, whereas 58 (82.9%) had multiple primary outcome analyses without statistical adjustment or had undefined outcome analyses. There were 39 (55.7%) registered trials. Only two trials were registered prior to patient enrollment with a single primary outcome variable and time point of assessment; however, in one of these two trials, the registered and published outcomes were discrepant. No studies were adequately registered per the Standard Protocol Items: Recommendations for Interventional Trials (SPIRIT) guidelines. Compared to psychosomatic and behavioral medicine journals, the proportion of published trials with adequate outcome analysis declarations was significantly lower in JCCP (17.1% versus 32.9%; p = 0.029). The proportion of registered trials in JCCP (55.7%) was comparable to that in behavioral medicine journals (52.6%; p = 0.709).

Conclusions

The quality of published outcome analysis definitions and trial registrations in JCCP is suboptimal. Greater attention to proper trial registration and outcome analysis definition in published reports is needed.

19.
20.

Background

The reporting of outcomes within published randomized trials has previously been shown to be incomplete, biased and inconsistent with study protocols. We sought to determine whether outcome reporting bias would be present in a cohort of government-funded trials subjected to rigorous peer review.

Methods

We compared protocols for randomized trials approved for funding by the Canadian Institutes of Health Research (formerly the Medical Research Council of Canada) from 1990 to 1998 with subsequent reports of the trials identified in journal publications. Characteristics of reported and unreported outcomes were recorded from the protocols and publications. Incompletely reported outcomes were defined as those with insufficient data provided in publications for inclusion in meta-analyses. An overall odds ratio measuring the association between completeness of reporting and statistical significance was calculated stratified by trial. Finally, primary outcomes specified in trial protocols were compared with those reported in publications.

Results

We identified 48 trials with 68 publications and 1402 outcomes. The median number of participants per trial was 299, and 44% of the trials were published in general medical journals. A median of 31% (10th–90th percentile range 5%–67%) of outcomes measured to assess the efficacy of an intervention (efficacy outcomes) and 59% (0%–100%) of those measured to assess the harm of an intervention (harm outcomes) per trial were incompletely reported. Statistically significant efficacy outcomes had a higher odds than nonsignificant efficacy outcomes of being fully reported (odds ratio 2.7; 95% confidence interval 1.5–5.0). Primary outcomes differed between protocols and publications for 40% of the trials.

Interpretation

Selective reporting of outcomes frequently occurs in publications of high-quality government-funded trials.

Selective reporting of results from randomized trials can occur either at the level of end points within published studies (outcome reporting bias)1 or at the level of entire trials that are selectively published (study publication bias).2 Outcome reporting bias has previously been demonstrated in a broad cohort of published trials approved by a regional ethics committee.1 The Canadian Institutes of Health Research (CIHR), the primary federal funding agency, known before 2000 as the Medical Research Council of Canada (MRC), recognized the need to address this issue and conducted an internal review process in 2002 to evaluate the reporting of results from its funded trials. The primary objectives were to determine (a) the prevalence of incomplete outcome reporting in journal publications of randomized trials; (b) the degree of association between adequate outcome reporting and statistical significance; and (c) the consistency between primary outcomes specified in trial protocols and those specified in subsequent journal publications.
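The study's odds ratio for full reporting is stratified by trial; the Mantel-Haenszel estimator is the standard way to pool 2x2 tables across strata. A sketch with made-up strata (the table values are hypothetical, not the study's data):

```python
def mantel_haenszel_or(tables):
    """Mantel-Haenszel pooled odds ratio over 2x2 strata (a, b, c, d):
    a = significant & fully reported,    b = significant & not,
    c = nonsignificant & fully reported, d = nonsignificant & not."""
    numerator = sum(a * d / (a + b + c + d) for a, b, c, d in tables)
    denominator = sum(b * c / (a + b + c + d) for a, b, c, d in tables)
    return numerator / denominator

# Two hypothetical trials' outcome tables:
tables = [(10, 5, 4, 8), (6, 3, 2, 7)]
print(round(mantel_haenszel_or(tables), 2))
```

Stratifying by trial, as the authors did, prevents between-trial differences in outcome counts from confounding the within-trial association between significance and full reporting.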
