Similar Articles
20 similar articles retrieved
1.
2.
3.
Systematic reviews and meta-analyses are essential to summarize evidence relating to efficacy and safety of health care interventions accurately and reliably. The clarity and transparency of these reports, however, is not optimal. Poor reporting of systematic reviews diminishes their value to clinicians, policy makers, and other users. Since the development of the QUOROM (QUality Of Reporting Of Meta-analysis) Statement—a reporting guideline published in 1999—there have been several conceptual, methodological, and practical advances regarding the conduct and reporting of systematic reviews and meta-analyses. Also, reviews of published systematic reviews have found that key information about these studies is often poorly reported. Realizing these issues, an international group that included experienced authors and methodologists developed PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analyses) as an evolution of the original QUOROM guideline for systematic reviews and meta-analyses of evaluations of health care interventions. The PRISMA Statement consists of a 27-item checklist and a four-phase flow diagram. The checklist includes items deemed essential for transparent reporting of a systematic review. In this Explanation and Elaboration document, we explain the meaning and rationale for each checklist item. For each item, we include an example of good reporting and, where possible, references to relevant empirical studies and methodological literature. The PRISMA Statement, this document, and the associated Web site (http://www.prisma-statement.org/) should be helpful resources to improve reporting of systematic reviews and meta-analyses.

4.
5.
Ma B, Guo J, Qi G, Li H, Peng J, Zhang Y, Ding Y, Yang K. PLoS ONE. 2011;6(5):e20185.

Background

Systematic reviews (SRs) of traditional Chinese medicine (TCM) have become increasingly popular in China and have been published in large numbers. This review provides the first examination of the epidemiological characteristics of these SRs as well as their compliance with the PRISMA and AMSTAR guidelines.

Objectives

To examine epidemiological and reporting characteristics as well as methodological quality of SRs of TCM published in Chinese journals.

Methods

Four Chinese databases (CBM, CSJD, CJFD, and the Wanfang Database) were searched for SRs of TCM from inception through December 2009. Data were extracted into Excel spreadsheets. The PRISMA and AMSTAR checklists were used to assess reporting characteristics and methodological quality, respectively.
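As an illustration of how this kind of checklist-based appraisal can be tallied once each review has been scored yes/no against the items, the short Python sketch below computes per-item reporting rates. The item names and scores are invented for the example and are not taken from the study's extraction forms.

```python
from collections import defaultdict

# Hypothetical yes/no scores: review ID -> {checklist item: adequately reported?}
assessments = {
    "review_001": {"title": True, "protocol_registration": False, "publication_bias": True},
    "review_002": {"title": True, "protocol_registration": False, "publication_bias": False},
    "review_003": {"title": True, "protocol_registration": True,  "publication_bias": True},
}

def item_compliance(assessments):
    """Percentage of reviews reporting each checklist item adequately."""
    counts = defaultdict(int)
    for scores in assessments.values():
        for item, reported in scores.items():
            counts[item] += int(reported)
    n = len(assessments)
    return {item: 100.0 * hits / n for item, hits in counts.items()}

for item, pct in sorted(item_compliance(assessments).items()):
    print(f"{item}: reported by {pct:.1f}% of reviews")
```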

Results

A total of 369 SRs were identified, most (97.6%) of which used the terms systematic review or meta-analysis in the title. None of the reviews had been updated. Half (49.8%) were written by clinicians and nearly half (47.7%) were published in specialty journals. The impact factors of 45.8% of the publishing journals were zero. The most commonly treated conditions were diseases of the circulatory and digestive systems. Funding sources were not reported for any of the reviews. Most (68.8%) reported information about quality assessment, while fewer than half (43.6%) reported assessing for publication bias. Statistical mistakes appeared in one third (29.3%) of the reviews, and most (91.9%) did not report on conflicts of interest.

Conclusions

While many SRs of TCM interventions have been published in Chinese journals, the quality of these reviews is troubling. As a potential key source of information for clinicians and researchers, many of these reviews were not only incomplete but also contained mistakes or were misleading. Focusing on improving the quality of SRs of TCM, rather than continuing to publish them in great quantity, is urgently needed in order to increase the value of these studies.

6.
7.

Background

Systematic reviews (SRs) and meta-analyses (MAs) provide the highest possible level of evidence. However, poor conduct or reporting of SRs and MAs may reduce their utility. The PRISMA Statement (Preferred Reporting Items for Systematic reviews and Meta-Analyses) was developed to help authors report their SRs and MAs adequately.

Objectives

Our objectives were to (1) evaluate the quality of reporting of SRs and MAs and their abstracts in otorhinolaryngologic literature using the PRISMA and PRISMA for Abstracts checklists, respectively, (2) compare the quality of reporting of SRs and MAs published in Ear Nose Throat (ENT) journals to the quality of SRs and MAs published in the ‘gold standard’ Cochrane Database of Systematic Reviews (CDSR), and (3) formulate recommendations to improve reporting of SRs and MAs in ENT journals.

Methods

On September 3, 2014, we searched the PubMed database using a combination of filters to retrieve SRs and MAs on otorhinolaryngologic topics published in 2012 and 2013 in the top 5 ENT journals (ISI Web of Knowledge 2013) or the CDSR, and selected the relevant articles. We assessed how many, and which, PRISMA (for Abstracts) items were reported adequately per journal type.

Results

We identified large differences in the reporting of individual items between the two journal types, with room for improvement. In general, SRs and MAs published in ENT journals (n = 31) reported a median of 54.4% of the PRISMA items adequately, whereas the 49 articles published in the CDSR reported a median of 100.0% adequately (a statistically significant difference, p < 0.001). For abstracts, medians of 41.7% for ENT journals and 75.0% for the CDSR were found (p < 0.001).

Conclusion

The reporting of SRs and MAs in ENT journals leaves room for improvement and would benefit if the PRISMA Statement were endorsed by these journals.

8.

Background

Research needs to be reported transparently so readers can critically assess the strengths and weaknesses of the design, conduct, and analysis of studies. Reporting guidelines have been developed to inform reporting for a variety of study designs. The objective of this study was to identify whether there is a need to develop a reporting guideline for survey research.

Methods and Findings

We conducted a three-part project: (1) a systematic review of the literature (including “Instructions to Authors” from the top five journals of 33 medical specialties and top 15 general and internal medicine journals) to identify guidance for reporting survey research; (2) a systematic review of evidence on the quality of reporting of surveys; and (3) a review of reporting of key quality criteria for survey research in 117 recently published reports of self-administered surveys. Fewer than 7% of medical journals (n = 165) provided guidance to authors on survey research despite a majority having published survey-based studies in recent years. We identified four published checklists for conducting or reporting survey research, none of which were validated. We identified eight previous reviews of survey reporting quality, which focused on issues of non-response and accessibility of questionnaires. Our own review of 117 published survey studies revealed that many items were poorly reported: few studies provided the survey or core questions (35%), reported the validity or reliability of the instrument (19%), defined the response rate (25%), discussed the representativeness of the sample (11%), or identified how missing data were handled (11%).

Conclusions

There is limited guidance and no consensus regarding the optimal reporting of survey research. The majority of key reporting criteria are poorly reported in peer-reviewed survey research articles. Our findings highlight the need for clear and consistent reporting guidelines specific to survey research.

9.
10.

Background

The impact factors of biomedical journals tend to rise over time. We sought to assess the trend in the impact factor, during the past decade, of journals published on behalf of United States (US) and European scientific societies, in four select biomedical subject categories (Biology, Cell Biology, Critical Care Medicine, and Infectious Diseases).

Methods

We identified all journals included in the above-mentioned subject categories of Thomson Reuters Journal Citation Reports® for the years 1999, 2002, 2005, and 2008. We selected those that were published on behalf of US or European scientific societies, as documented in journal websites.

Results

We included 167 journals (35 in the subject category of Biology, 79 in Cell Biology, 27 in Critical Care Medicine, and 26 in Infectious Diseases). Between 1999 and 2008, the percentage increase in the impact factor of the European journals was higher than that of the US journals (73.7±110.0% compared with 39.7±70.0%, p = 0.049). Regarding specific subject categories, the percentage change in the impact factor of the European journals tended to be higher than that of the respective US journals for Cell Biology (61.7% versus 16.3%), Critical Care Medicine (212.4% versus 65.4%), and Infectious Diseases (88.3% versus 48.7%), whereas the opposite was observed for journals in Biology (41.0% versus 62.5%).

Conclusion

Journals published on behalf of European scientific societies, in select biomedical fields, may tend to close the “gap” in impact factor compared with those of US societies.

What's Already Known About This Topic?

The impact factors of biomedical journals tend to rise over time. The leading positions in productivity in biomedical research are held by developed countries, including those of North America and Western Europe.

What Does This Article Add?

The journals from European biomedical scientific societies tended, over the past decade, to increase their impact factor more than the respective US journals.

11.

Background

Recruitment of participants into randomised controlled trials (RCTs) is critical for successful trial conduct. Although there have been two previous systematic reviews on related topics, the results (which identified specific interventions) were inconclusive and not generalizable. The aim of our study was to evaluate the relative effectiveness of recruitment strategies for participation in RCTs.

Methods and Findings

We conducted a systematic review, reported according to the PRISMA guideline, of studies that compared methods of recruiting individual study participants into an actual or mock RCT. We searched MEDLINE, Embase, The Cochrane Library, and reference lists of relevant studies. From over 16,000 titles or abstracts reviewed, 396 papers were retrieved and 37 studies were included, in which 18,812 of at least 59,354 people approached agreed to participate in a clinical RCT. Recruitment strategies were broadly divided into four groups: novel trial designs (eight studies), recruiter differences (eight studies), incentives (two studies), and provision of trial information (19 studies). Strategies that increased people's awareness of the health problem being studied (e.g., an interactive computer program [relative risk (RR) 1.48, 95% confidence interval (CI) 1.00–2.18], attendance at an education session [RR 1.14, 95% CI 1.01–1.28], addition of a health questionnaire [RR 1.37, 95% CI 1.14–1.66], or a video about the health condition [RR 1.75, 95% CI 1.11–2.74]), as well as monetary incentives (RR 1.39, 95% CI 1.13–1.64 to RR 1.53, 95% CI 1.28–1.84), improved recruitment. Increasing patients' understanding of the trial process, recruiter differences, and various methods of randomisation and consent design did not show a difference in recruitment. Consent rates were also higher for nonblinded trial designs, but differential loss to follow-up between groups may jeopardise the study findings. The study's main limitation was the necessity of modifying the search strategy with subsequent search updates because of changes in MEDLINE definitions. The abstracts of previous versions of this systematic review were published in 2002 and 2007.
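For readers unfamiliar with the relative risks quoted above, the following minimal sketch shows how an RR and its log-based 95% CI are conventionally computed from recruitment counts. The counts are invented for illustration and are not the review's own data or analysis code.

```python
import math

def relative_risk(events_a, total_a, events_b, total_b, z=1.96):
    """Relative risk of recruitment in group A vs group B, with a log-based 95% CI."""
    risk_a = events_a / total_a
    risk_b = events_b / total_b
    rr = risk_a / risk_b
    # Standard error of ln(RR) under the usual large-sample approximation
    se_log_rr = math.sqrt(1/events_a - 1/total_a + 1/events_b - 1/total_b)
    lo = math.exp(math.log(rr) - z * se_log_rr)
    hi = math.exp(math.log(rr) + z * se_log_rr)
    return rr, lo, hi

# Hypothetical trial: 60/200 agreed after an awareness intervention,
# 45/210 agreed after standard information only.
rr, lo, hi = relative_risk(60, 200, 45, 210)
print(f"RR {rr:.2f}, 95% CI {lo:.2f} to {hi:.2f}")
```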

Conclusion

Recruitment strategies that focus on increasing potential participants' awareness of the health problem being studied, its potential impact on their health, and their engagement in the learning process appeared to increase recruitment to clinical studies. Further trials of recruitment strategies that target engaging participants to increase their awareness of the health problems being studied and the potential impact on their health may confirm this hypothesis.

12.

Background

The QUOROM and PRISMA statements were published in 1999 and 2009, respectively, to improve the consistency of reporting systematic reviews (SRs)/meta-analyses (MAs) of clinical trials. However, not all SRs/MAs adhere completely to these important standards. In particular, it is not clear how well SRs/MAs of acupuncture studies adhere to reporting standards and which reporting criteria are generally ignored in these analyses.

Objectives

To evaluate reporting quality in SRs/MAs of acupuncture studies.

Methods

We performed a literature search for studies published prior to 2014 using the following public archives: PubMed, EMBASE, Web of Science, the Cochrane Database of Systematic Reviews (CDSR), the Chinese Biomedical Literature Database (CBM), the Traditional Chinese Medicine (TCM) database, the Chinese Journal Full-text Database (CJFD), the Chinese Scientific Journal Full-text Database (CSJD), and the Wanfang database. Data were extracted into pre-prepared Excel data-extraction forms. Reporting quality was assessed based on the PRISMA checklist (27 items).

Results

Of the 476 appropriate SRs/MAs identified in our search, 203, 227, and 46 were published in Chinese journals, international journals, and the Cochrane Database, respectively. Of these 476 SRs/MAs, only 3 reported all of the information completely. By contrast, approximately 0.49% (1/203), 0.88% (2/227), and 0.00% (0/46) of the SRs/MAs in Chinese journals, international journals, and the CDSR, respectively, reported fewer than 10 items. In general, the least frequently reported items (reported by ≤50% of reviews) were “protocol and registration”, “risk of bias across studies”, and “additional analyses” in both the methods and results sections.

Conclusions

SRs/MAs of acupuncture studies have not comprehensively reported information recommended in the PRISMA statement. Our study underscores that, in addition to focusing on careful study design and performance, attention should be paid to comprehensive reporting standards in SRs/MAs on acupuncture studies.  相似文献   

13.

Background

Transparency in reporting of conflict of interest is an increasingly important aspect of publication in medical journals. Publication of large industry-supported trials may generate many citations and journal income through reprint sales and thereby be a source of conflicts of interest for journals. We investigated industry-supported trials' influence on journal impact factors and revenue.

Methods and Findings

We sampled six major medical journals (Annals of Internal Medicine, Archives of Internal Medicine, BMJ, JAMA, The Lancet, and New England Journal of Medicine [NEJM]). For each journal, we identified randomised trials published in 1996–1997 and 2005–2006 using PubMed, and categorized the type of financial support. Using Web of Science, we investigated citations of industry-supported trials and their influence on journal impact factors over a ten-year period. We contacted journal editors and retrieved tax information on income from industry sources. The proportion of trials with sole industry support varied between journals, from 7% in BMJ to 32% in NEJM in 2005–2006. Industry-supported trials were more frequently cited than trials with other types of support, and omitting them from the impact factor calculation decreased journal impact factors. The decrease varied considerably between journals, ranging from 1% for BMJ to 15% for NEJM in 2007. For the two journals disclosing data, income from the sales of reprints contributed 3% and 41% of the total income for BMJ and The Lancet, respectively, in 2005–2006.
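To make the mechanism concrete, here is a minimal sketch of the impact-factor arithmetic: citations received in a given year to items published in the two preceding years, divided by the number of citable items, recomputed with a heavily cited subset of trials removed. The numbers are invented and are not the journals' actual citation data.

```python
def impact_factor(citations, citable_items):
    """Citations in year Y to items from Y-1 and Y-2, divided by the number of those items."""
    return citations / citable_items

# Hypothetical journal: 20,000 citations to 1,000 citable items...
if_all = impact_factor(20_000, 1_000)

# ...of which 50 industry-supported trials accounted for 4,000 citations.
if_without_trials = impact_factor(20_000 - 4_000, 1_000 - 50)

print(f"Impact factor with trials:    {if_all:.1f}")
print(f"Impact factor without trials: {if_without_trials:.1f}")
print(f"Relative decrease: {100 * (1 - if_without_trials / if_all):.0f}%")
```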

Conclusions

Publication of industry-supported trials was associated with an increase in journal impact factors. Sales of reprints may provide a substantial income. We suggest that journals disclose financial information in the same way that they require such disclosures from their authors, so that readers can assess the potential effect of different types of papers on journals' revenue and impact.

14.

Introduction

Reporting guidelines (e.g. CONSORT) have been developed as tools to improve quality and reduce bias in reporting research findings. Trial registration has been recommended for countering selective publication. The International Committee of Medical Journal Editors (ICMJE) encourages the implementation of reporting guidelines and trial registration through its uniform requirements for manuscripts (URM). For the last two decades, however, biased reporting and insufficient registration of clinical trials have been identified in several literature reviews and other investigations. No study has so far investigated the extent to which author instructions in psychiatry journals encourage adherence to reporting guidelines and trial registration.

Method

Psychiatry journals were identified from the 2011 Journal Citation Reports. Information given in the author instructions and during the submission procedure of all journals was assessed to determine whether major reporting guidelines, trial registration, and the ICMJE's URM in general were mentioned and whether adherence was recommended.

Results

We included 123 psychiatry journals (English and German language) in our analysis. A minority recommend or require (1) following the URM (21%), (2) adherence to reporting guidelines such as CONSORT, PRISMA, and STROBE (23%, 7%, and 4%, respectively), or (3) registration of clinical trials (34%). The subsample of the top 10 psychiatry journals (ranked by impact factor) showed considerably better, though still improvable, rates. For example, 70% of the top 10 psychiatry journals do not ask for the specific trial registration number.

Discussion

Under the assumption that better reported and better registered clinical research that does not lack substantial information will improve the understanding, credibility, and unbiased translation of clinical research findings, several stakeholders, including readers (physicians and patients), authors, reviewers, and editors, might benefit from improved author instructions in psychiatry journals. A first step toward improvement would be to require adherence to the broadly accepted reporting guidelines and to trial registration.

15.
16.
17.

Background

The quality of reporting in systematic reviews (SRs)/meta-analyses (MAs) of diagnostic tests published by authors in China has not been evaluated. The aims of the present study were to evaluate the quality of reporting in diagnostic SRs/MAs using the PRISMA statement and to determine changes in the quality of reporting over time.

Methods

According to the inclusion and exclusion criteria, we searched five databases (the Chinese Biomedical Literature Database, PubMed, EMBASE, the Cochrane Library, and Web of Knowledge) to identify SRs/MAs of diagnostic tests. The searches were conducted on July 14, 2012, and the cut-off for inclusion of the SRs/MAs was December 31, 2011. The PRISMA statement was used to assess the quality of reporting. Analyses were performed using Excel 2003 and RevMan 5.

Results

A total of 312 studies were included, covering fifteen disease systems. According to the PRISMA checklist, there were serious reporting flaws in the following items: structured summary (item 2, 22.4%), objectives (item 4, 18.9%), protocol and registration (item 5, 2.6%), risk of bias across studies (item 15, 26.3%), and funding (item 27, 28.8%). The subgroup analysis showed statistically significant improvement in total compliance for 9 PRISMA items after PRISMA was released; 6 items were statistically improved among funded articles, 3 items were statistically improved for CSCD articles, and the proportion of reviews reporting adequately increased significantly for 22 items among SCI articles (P<0.050).
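As a rough illustration of how such a before/after subgroup comparison might be tested, the sketch below runs a two-proportion z-test on invented counts for a single PRISMA item reported before versus after 2009. It is not the study's own analysis code, and the counts are hypothetical.

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """z statistic and two-sided p-value for H0: p1 == p2 (normal approximation)."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical: "risk of bias across studies" reported in 20/150 reviews
# published before PRISMA (2009) versus 60/162 reviews published afterwards.
z, p = two_proportion_z(60, 162, 20, 150)
print(f"z = {z:.2f}, p = {p:.4f}")
```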

Conclusion

The number of diagnostic SRs/MAs is increasing annually, and the quality of reporting has measurably improved over previous years. Unfortunately, there are still many deficiencies in the reporting, including protocol and registration, the search, risk of bias across studies, and funding. Future Chinese reviewers should address these aspects.

18.

Background

As a first step in developing a framework to evaluate and improve the quality of care of children in primary care, there is a need to identify the evidence base underpinning interventions relevant to child health. Our objective was to identify all Cochrane systematic reviews relevant to the management of childhood conditions in primary care and to assess the extent to which Cochrane reviews reflect the burden of childhood illness presenting in primary care.

Methodology/Principal Findings

We used the Cochrane Child Health Field register of child-relevant systematic reviews to complete an overview of Cochrane reviews related to the management of children in primary care. We compared the proportion of systematic reviews with the proportion of consultations for children in Australian, US, Dutch, and UK general practice. We identified 396 relevant systematic reviews; 358 included primary studies on children, while 251 undertook a meta-analysis. Most reviews (n = 218, 55%) focused on chronic conditions and over half (n = 216, 57%) evaluated drug interventions. Since 2000, the percentage of pediatric primary care relevant reviews has increased by only 2% (from 7% to 9%), compared with an 18% increase (from 10% to 28%) in all child-relevant reviews. Almost a quarter of the reviews (n = 78, 23%) addressed asthma treatments, which account for only 3–5% of consultations. Conversely, skin conditions account for 15–23% of consultations yet represent only 7% (n = 23) of reviews.

Conclusions/Significance

Although Cochrane systematic reviews focus on clinical trials and do not provide a comprehensive picture of the evidence base underpinning the management of children in primary care, the mismatch between the focus of the published research and the focus of clinical activity is striking. Clinical trials are an important component of the evidence base and the lack of trial evidence to demonstrate intervention effectiveness in substantial areas of primary care for children should be addressed.

19.

Background

Systematic reviews with meta-analyses often contain many statistical tests. This multiplicity may increase the risk of type I error. Few attempts have been made to address the problem of statistical multiplicity in systematic reviews. Before the implications are properly considered, the size of the issue deserves clarification. Because of the emphasis on bias evaluation and because of the editorial processes involved, Cochrane reviews may contain more multiplicity than their non-Cochrane counterparts. This study measured the quantity of statistical multiplicity present in a population of systematic reviews and aimed to assess whether this quantity is different in Cochrane and non-Cochrane reviews.
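To see why multiplicity inflates the type I error risk, consider the simple worked sketch below. It assumes independent tests at alpha = 0.05, which is an illustrative simplification rather than the study's own model; the test counts loosely echo the medians reported in the findings.

```python
# Family-wise error rate (FWER) under independence: 1 - (1 - alpha)^m.
alpha = 0.05

for m in (1, 8, 10, 12):
    fwer = 1 - (1 - alpha) ** m      # probability of at least one false positive
    bonferroni = alpha / m           # per-test threshold that caps the FWER near alpha
    print(f"{m:2d} tests: P(>=1 false positive) = {fwer:.2f}, "
          f"Bonferroni per-test alpha = {bonferroni:.4f}")
```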

Methods/Principal Findings

We selected all the systematic reviews published by the Cochrane Anaesthesia Review Group containing a meta-analysis and matched them with comparable non-Cochrane reviews. We counted the number of statistical tests done in each systematic review. The median number of tests overall was 10 (interquartile range (IQR) 6 to 18). The median was 12 in Cochrane and 8 in non-Cochrane reviews (difference in medians 4, 95% confidence interval (CI) 2.0 to 19.0). The proportion that used an assessment of risk of bias as a reason for doing extra analyses was 42% in Cochrane and 28% in non-Cochrane reviews (difference in proportions 14%, 95% CI −8% to 36%). The issue of multiplicity was addressed in 6% of all the reviews.

Conclusion/Significance

Statistical multiplicity in systematic reviews requires attention. We found more multiplicity in Cochrane reviews than in non-Cochrane reviews. Many of the reasons for the increase in multiplicity may well represent improved methodological approaches and greater transparency, but multiplicity may also cause an increased risk of spurious conclusions. Few systematic reviews, whether Cochrane or non-Cochrane, address the issue of multiplicity.

20.

Background

Thousands of systematic reviews have been conducted in all areas of health care. However, the methodological quality of these reviews is variable and should routinely be appraised. AMSTAR is a measurement tool to assess systematic reviews.

Methodology

AMSTAR was used to appraise 42 reviews focusing on therapies to treat gastro-esophageal reflux disease, peptic ulcer disease, and other acid-related diseases. Two assessors applied AMSTAR to each review. Two other assessors, plus a clinician and/or a methodologist, independently applied a global assessment to each review.

Conclusions

The sample of 42 reviews covered a wide range of methodological quality. The overall scores on AMSTAR ranged from 0 to 10 (out of a maximum of 11) with a mean of 4.6 (95% CI: 3.7 to 5.6) and median 4.0 (range 2.0 to 6.0). The inter-observer agreement of the individual items ranged from moderate to almost perfect agreement. Nine items scored a kappa of >0.75 (95% CI: 0.55 to 0.96). The reliability of the total AMSTAR score was excellent: kappa 0.84 (95% CI: 0.67 to 1.00) and Pearson's R 0.96 (95% CI: 0.92 to 0.98). The overall scores for the global assessment ranged from 2 to 7 (out of a maximum score of 7) with a mean of 4.43 (95% CI: 3.6 to 5.3) and median 4.0 (range 2.25 to 5.75). The agreement was lower, with a kappa of 0.63 (95% CI: 0.40 to 0.88). Construct validity was shown by AMSTAR convergence with the results of the global assessment: Pearson's R 0.72 (95% CI: 0.53 to 0.84). For the AMSTAR total score, the limits of agreement were −0.19±1.38. This translates to a minimum detectable difference between reviews of 0.64 ‘AMSTAR points’. Further validation of AMSTAR is needed to assess its validity, reliability, and perceived utility by appraisers and end users of reviews across a broader range of systematic reviews.
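For readers unfamiliar with the agreement statistic quoted above, the following sketch computes Cohen's kappa for two raters scoring a single yes/no item across a set of reviews. The ratings are made up for the example and do not come from this appraisal.

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa for two raters over the same items (categorical labels)."""
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    freq1, freq2 = Counter(rater1), Counter(rater2)
    # Chance-expected agreement from each rater's marginal label frequencies
    expected = sum(freq1[c] * freq2[c] for c in set(rater1) | set(rater2)) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical item scores ("Y"/"N") from two assessors across 12 reviews.
r1 = list("YYNYYNYYYNYY")
r2 = list("YYNYNNYYYNYY")
print(f"kappa = {cohens_kappa(r1, r2):.2f}")  # 0.80 for these made-up ratings
```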
