Similar articles
20 similar articles found.
1.
There is growing concern that poor experimental design and a lack of transparent reporting contribute to the frequent failure of pre-clinical animal studies to translate into treatments for human disease. In 2010, the Animal Research: Reporting of In Vivo Experiments (ARRIVE) guidelines were introduced to help improve reporting standards. They were published in PLOS Biology and endorsed by funding agencies and by publishers and their journals, including PLOS, the Nature research journals, and other top-tier titles. Yet our analysis of papers published in PLOS and Nature journals indicates that there has been very little improvement in reporting standards since then. This suggests that authors, referees, and editors are generally ignoring the guidelines, and that editorial endorsement has yet to be effectively implemented.

3.
Objective

To improve the accuracy and completeness of reporting of studies of diagnostic accuracy, to allow readers to assess the potential for bias in a study, and to evaluate a study's generalisability.

Methods

The Standards for Reporting of Diagnostic Accuracy (STARD) steering committee searched the literature to identify publications on the appropriate conduct and reporting of diagnostic studies and extracted potential items into an extensive list. Researchers, editors, and members of professional organisations shortened this list during a two-day consensus meeting, with the goal of developing a checklist and a generic flow diagram for studies of diagnostic accuracy.

Results

The search for published guidelines on diagnostic research yielded 33 previously published checklists, from which we extracted a list of 75 potential items. At the consensus meeting, participants shortened the list to a 25-item checklist, using evidence whenever available. A prototype flow diagram provides information about the method of patient recruitment, the order of test execution, and the numbers of patients undergoing the test under evaluation, the reference standard, or both.

Conclusions

Evaluation of research depends on complete and accurate reporting. If medical journals adopt the STARD checklist and flow diagram, the quality of reporting of studies of diagnostic accuracy should improve to the advantage of clinicians, researchers, reviewers, journals, and the public.

The STARD steering group aims to improve the accuracy and completeness of reporting of studies of diagnostic accuracy; here the group describes and explains the development of a checklist and flow diagram for authors of reports.

5.
Objectives: We investigated how often journal articles reporting on human HIV research in four developing-world countries mention approval by an institutional review board (IRB) or research ethics committee (REC), and what factors are involved. Methods: We examined all such articles published in 2007 from India, Nigeria, Thailand and Uganda, and coded these for several ethical and other characteristics. Results: Of 221 articles meeting inclusion criteria, 32.1% did not mention IRB approval. Mention of IRB approval was associated with: biomedical (versus psychosocial) research (P = 0.001), more sponsor-country authors (P = 0.003), a sponsor-country corresponding author (P = 0.047), mention of funding (P < 0.001), the particular host country involved (P = 0.002), journals having sponsor-country editors (P < 0.001), and journal-stated compliance with International Committee of Medical Journal Editors (ICMJE) guidelines (P = 0.003). Logistic regression identified three significant factors: mention of funding, a journal having sponsor-country editors, and biomedical research. Conclusions: One-third of articles still do not mention IRB approval. Mention varied by host country and was associated with biomedical research and greater sponsor-country involvement. Recently, some journals have required mention of IRB approval, but allow authors to do so in cover letters to editors rather than in the article itself. These data suggest that journals should instead require articles themselves to document adherence to ethical standards.
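A minimal sketch of the kind of analysis reported above: fitting a logistic regression of whether an article mentions IRB approval on a few binary article characteristics. The data, variable names, and coefficients below are synthetic illustrations, not the study's actual dataset.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 221  # number of articles, matching the study's sample size
    df = pd.DataFrame({
        "funding_mentioned": rng.integers(0, 2, n),
        "sponsor_country_editor": rng.integers(0, 2, n),
        "biomedical": rng.integers(0, 2, n),
    })
    # Simulate the outcome so each factor raises the odds of mentioning IRB approval
    # (hypothetical effect sizes, for illustration only)
    logit_p = (-0.5 + 1.0 * df["funding_mentioned"]
               + 0.8 * df["sponsor_country_editor"] + 0.9 * df["biomedical"])
    df["irb_mentioned"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

    X = sm.add_constant(df[["funding_mentioned", "sponsor_country_editor", "biomedical"]])
    fit = sm.Logit(df["irb_mentioned"], X).fit(disp=False)
    print(np.exp(fit.params))  # odds ratios for each factor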

8.
PLoS Medicine, 2021, 18(10)
Background

The importance of infectious disease epidemic forecasting and prediction research is underscored by decades of communicable disease outbreaks, including COVID-19. Unlike other fields of medical research, such as clinical trials and systematic reviews, epidemic forecasting and prediction research has no reporting guidelines despite their utility. We therefore developed the EPIFORGE checklist, a guideline for standardized reporting of epidemic forecasting research.

Methods and findings

We developed this checklist using a best-practice process for the development of reporting guidelines, involving a Delphi process and broad consultation with an international panel of infectious disease modelers and model end users. The objectives of these guidelines are to improve the consistency, reproducibility, comparability, and quality of epidemic forecasting reporting. The guidelines are not designed to advise scientists on how to perform epidemic forecasting and prediction research, but rather to serve as a standard for reporting the critical methodological details of such studies.

Conclusions

These guidelines have been submitted to the EQUATOR network and are hosted on other dedicated webpages to facilitate feedback and journal endorsement.

Simon Pollett and co-workers describe EPIFORGE, a guideline for reporting research on epidemic forecasting.

9.

Introduction

Reporting guidelines (e.g. CONSORT) have been developed as tools to improve quality and reduce bias in reporting research findings. Trial registration has been recommended for countering selective publication. The International Committee of Medical Journal Editors (ICMJE) encourages the implementation of reporting guidelines and trial registration through its Uniform Requirements for Manuscripts (URM). For the last two decades, however, biased reporting and insufficient registration of clinical trials have been identified in several literature reviews and other investigations. No study has so far investigated the extent to which author instructions in psychiatry journals encourage following reporting guidelines and trial registration.

Method

Psychiatry journals were identified from the 2011 Journal Citation Report. Information given in the author instructions and during the submission procedure of all journals was assessed to determine whether major reporting guidelines, trial registration, and the ICMJE's URM in general were mentioned and adherence to them recommended.

Results

We included 123 psychiatry journals (English- and German-language) in our analysis. A minority recommend or require 1) following the URM (21%), 2) adhering to reporting guidelines such as CONSORT, PRISMA, or STROBE (23%, 7%, and 4%, respectively), or 3) registering clinical trials (34%). The subsample of the top 10 psychiatry journals (ranked by impact factor) performed considerably better, though with clear room for improvement; for example, 70% of the top 10 psychiatry journals do not ask for the specific trial registration number.

Discussion

Assuming that better-reported and better-registered clinical research, free of substantial omissions, will improve the understanding, credibility, and unbiased translation of clinical research findings, several stakeholders, including readers (physicians, patients), authors, reviewers, and editors, might benefit from improved author instructions in psychiatry journals. A first step would be to require adherence to the broadly accepted reporting guidelines and to trial registration.

10.
In this study we assess the applicability of a set of reliability criteria proposed by Ågerstrand et al. by evaluating the reliability of 12 non-standard, peer-reviewed ecotoxicity and toxicity studies of bisphenol A. There was overall agreement between the evaluator and the authors of the papers regarding the results of the evaluations, suggesting that the criteria offer enough guidance to serve as a useful and consistent evaluation tool. The criteria provide a transparent and structured approach and ensure that a minimum, uniform set of criteria is applied. The evaluation of the peer-reviewed ecotoxicity and toxicity studies shows that important information is sometimes missing, so the studies do not always meet common regulatory requirements for reporting. Whether this is due to insufficient reporting or to poorly performed studies is not known. To improve reporting, and thereby promote reliability and reproducibility, researchers, reviewers, and editors are encouraged to use the suggested criteria as a guideline. In conclusion, to improve the reliability of peer-reviewed studies and increase their use in regulatory risk assessments of chemicals, the dialogue between regulators, researchers, and editors on how to evaluate and report studies needs to be strengthened.

11.
Despite much discussion of the importance of quantifying and reporting genotyping error in molecular studies, it is still not standard practice in the literature. This is particularly a concern for amplified fragment length polymorphism (AFLP) studies, where differences in laboratory, peak-calling and locus-selection protocols can generate data sets varying widely in genotyping error rate, the number of loci used and, potentially, estimates of genetic diversity or differentiation. In our experience, papers rarely provide adequate information on AFLP reproducibility, making meaningful comparisons among studies difficult. To quantify the extent of this problem, we reviewed the current molecular ecology literature (470 recent AFLP articles) to determine the proportion of studies that report an error rate and follow established guidelines for assessing error. Fifty-four per cent of recent articles do not report any assessment of data set reproducibility. Of those studies that do claim to have assessed reproducibility, the majority (~90%) either do not report a specific error rate or do not provide sufficient details to allow the reader to judge whether error was assessed correctly. Even among the papers that do report an error rate and provide details, many (≥23%) do not follow recommended standards for quantifying error. These issues also exist for other marker types, such as microsatellites, and for next-generation sequencing techniques, particularly those that use restriction enzymes for fragment generation. Therefore, we urge all researchers conducting genotyping studies to estimate and more transparently report genotyping error using existing guidelines, and we encourage journals to enforce stricter standards for the publication of genotyping studies.
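A minimal sketch of the replicate-based error-rate estimate such guidelines call for: genotype a subset of samples twice and report the mismatch rate across all locus-by-sample comparisons. The band matrices below are synthetic, and the ~2% injected error rate is an arbitrary illustration.

    import numpy as np

    rng = np.random.default_rng(1)
    n_samples, n_loci = 30, 200                     # replicated individuals x AFLP loci
    run1 = rng.integers(0, 2, (n_samples, n_loci))  # 1 = band present, 0 = absent
    run2 = run1.copy()
    flip = rng.random((n_samples, n_loci)) < 0.02   # inject ~2% scoring errors
    run2[flip] = 1 - run2[flip]

    # Error rate = mismatched scores / total locus-by-sample comparisons
    error_rate = np.mean(run1 != run2)
    print(f"estimated genotyping error rate: {error_rate:.2%}")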

12.
Routinely collected health data, obtained for administrative and clinical purposes without specific a priori research goals, are increasingly used for research. The rapid evolution and availability of these data have revealed issues not addressed by existing reporting guidelines, such as Strengthening the Reporting of Observational Studies in Epidemiology (STROBE). The REporting of studies Conducted using Observational Routinely collected health Data (RECORD) statement was created to fill these gaps. RECORD was created as an extension to the STROBE statement to address reporting items specific to observational studies using routinely collected health data. RECORD consists of a checklist of 13 items related to the title, abstract, introduction, methods, results, and discussion sections of articles, and other information required for inclusion in such research reports. This document contains the checklist and explanation and elaboration information to enhance its use. Examples of good reporting for each RECORD checklist item are also included herein. This document, as well as the accompanying website and message board (http://www.record-statement.org), will enhance the implementation and understanding of RECORD. Through implementation of RECORD, authors, journal editors, and peer reviewers can encourage transparency of research reporting.

13.
Accurate and complete reporting of study methods, results and interpretation are essential components of any scientific process, allowing end-users to evaluate the internal and external validity of a study. When animals are used in research, excellence in reporting is expected as a matter of the continued ethical acceptability of animal use in the sciences. Our primary objective was to assess completeness of reporting for a series of studies relevant to mitigation of pain in neonatal piglets undergoing routine management procedures. Our secondary objective was to illustrate how authors can report the items in the Reporting guidElines For randomized controLled trials for livEstoCk and food safety (REFLECT) statement, using examples from the animal welfare science literature. A total of 52 studies from 40 articles were evaluated using a modified REFLECT statement. No single study reported all REFLECT checklist items. Seven studies reported specific objectives with testable hypotheses. Six studies identified primary or secondary outcomes. Randomization and blinding were considered partially reported in 21 and 18 studies, respectively. No studies reported the rationale for their sample sizes. Several studies failed to report key design features such as units of measurement, means, standard deviations, and standard errors for continuous outcomes, or comparative characteristics for categorical outcomes expressed as rates or proportions. In the discipline of animal welfare science, authors, reviewers and editors are encouraged to use available reporting guidelines to ensure that scientific methods and results are adequately described and free of misrepresentations and inaccuracies. Complete and accurate reporting increases the ability to apply study results to decision-making and prevents waste of financial and animal resources.

14.
For scientific, ethical and economic reasons, experiments involving animals should be appropriately designed, correctly analysed and transparently reported. This increases the scientific validity of the results and maximises the knowledge gained from each experiment. A minimum amount of relevant information must be included in scientific publications to ensure that the methods and results of a study can be reviewed, analysed and repeated. Omitting essential information can raise scientific and ethical concerns. We report the findings of a systematic survey of reporting, experimental design and statistical analysis in published biomedical research using laboratory animals. Medline and EMBASE were searched for studies reporting research on live rats, mice and non-human primates carried out in UK and US publicly funded research establishments. Detailed information was collected from 271 publications about the objective or hypothesis of the study, the number, sex, age and/or weight of animals used, and the experimental and statistical methods. Only 59% of the studies stated the hypothesis or objective of the study and the number and characteristics of the animals used. Appropriate and efficient experimental design is a critical component of high-quality science. Most of the papers surveyed did not use randomisation (87%) or blinding (86%) to reduce bias in animal selection and outcome assessment. Only 70% of the publications that used statistical methods described their methods and presented the results with a measure of error or variability. This survey has identified a number of issues that need to be addressed in order to improve experimental design and reporting in publications describing research using animals. Scientific publication is a powerful and important source of information; the authors of scientific publications therefore have a responsibility to describe their methods and results comprehensively, accurately and transparently, and peer reviewers and journal editors share the responsibility to ensure that published studies fulfil these criteria.

16.
Meneghini R. EMBO Reports, 2012, 13(2):106-108
Emerging countries have established national scientific journals as an alternative publication route for their researchers. However, these journals eventually need to catch up to international standards.

Since the first scientific journal, The Philosophical Transactions of the Royal Society, was founded in 1665, the number of journals dedicated to publishing academic research has exploded. The Thomson Reuters Web of Knowledge database alone, which covers far less than the total number of academic journals, includes more than 11,000 journals from non-profit, society and commercial publishers, published in numerous languages and with content ranging from the natural sciences to the social sciences and humanities. Notwithstanding the sheer scale and diversity of academic publishing, however, there is a difference between the publishing enterprise in developed and emerging countries in terms of the commercial rationale behind the journals.

Although all academic journals seek to serve their readership by publishing the highest-quality and most interesting advances, a growing trend in the twentieth century saw publishers in developed countries viewing academic publishing as a way of generating profit; the desire of journal editors to publish the best and most interesting science thereby serves the commercial interest of publishers, who want people to buy the publication.

In emerging countries, however, there are few commercial reasons to publish a journal. Instead, 'national' or even 'local' journals are published and supported because they report important, practical information that would be declined by international journals, either because the topic is of only local or marginal interest, or because the research does not meet the high standards for publication at an international level. Consequently, most national journals are not able to finance themselves and depend on public funding. In Brazil, for instance, national journals account for one-third of all scientific articles published from Brazil and are mostly funded by the government. Other emerging countries that invest in research, notably China, India and Russia, also have a sizable number of national journals, most of which are published in their native languages.

There is little competition between developed countries to publish the most or the best scientific journals. There is clear competition between the top-flight journals, Nature and Science for example, but this competition is academic and/or commercial rather than national. In fact, countries of similar scientific calibre in terms of the research they generate differ greatly in the number of journals published within their borders. According to the Thomson Reuters database, the Netherlands, Switzerland and Sweden published 847, 202 and 30 scientific journals, respectively, in 2010 (the Netherlands has been a traditional haven for publishers). However, the number of articles published by researchers in these countries in journals indexed by Thomson Reuters, a rough measure of scientific productivity, does not differ significantly.

Scientists who edit high-quality international journals or serve on their editorial boards have a major responsibility, because they guide the direction and set the standards of scientific research. In deciding what to publish, they define the quality of research, promote emerging research areas and set the criteria by which research is judged to be new and exciting; they are the gatekeepers of science. The distribution of these scientists also reflects the division between developed and emerging countries in scientific publishing. To continue with the same examples, the Netherlands, Switzerland and Sweden contributed 235, 256 and 160 scientists, respectively, to the editorial teams or boards of 220 selected high-impact journals in 2005 (Braun & Diospatonyi, 2005), numbers comparable with these countries' scientific production in terms of publications. By contrast, Brazil, South Korea and Russia, countries as scientifically productive in total number of articles as the Netherlands, Switzerland and Sweden, contributed only 28, 29 and 55 'gatekeepers', respectively. A principal reason for this difference is, of course, the more variable quality of the science produced in emerging countries, but it is nevertheless clear that their scientists are under-represented on the teams that define the course and standards of scientific research.

To overcome the perceived dominance of international journals, and to address the significant barriers to getting published that their scientists face, some emerging countries have increased the number of national journals (Sumathipala et al, 2004). Such barriers have been well documented and include poor written English and the generally lower or more variable quality of the science produced in emerging countries. Although English, the lingua franca of modern science (Meneghini & Packer, 2007), is not as great a barrier as some would claim, there is some evidence of conscious or subconscious bias among reviewers and editors in judging articles from emerging countries (Meneghini et al, 2008; Sumathipala et al, 2004).

A third pressure has also pushed some emerging countries to introduce more national journals: greater scientific output. During the past two or three decades, several of these countries, notably China, India and Brazil, have made huge investments in research, which have enormously increased their scientific productivity. Initially, the new national journals aspired to adopt the rigid rules of peer review and the quality standards of international journals, but this approach did not produce satisfactory results in terms of the quality of the papers published. On the one hand, it is hard for national journals to secure the expertise of scientists competent to review their submissions; on the other, the reviewers who do agree tend to be more lenient, ostensibly believing that peer review as rigorous as that of international journals would run counter to the purpose of making scientific results publicly available, at least at the national level.

The establishment of national journals has, in effect, created two parallel communication streams for scientists in emerging countries: publication in international journals (the selective route) and publication in national journals (the regional route). On the basis of their perceived chances of being accepted by an international journal, authors can choose the route that gives them the best opportunity to make their results public. Economic conditions also matter: the resources to produce national journals come from government, so national journals can face budget cuts in times of austerity. In the worst case, this can lead to the demise of national journals, to the disadvantage of authors who have built their careers by publishing in them.

There is some anecdotal evidence that authors who often or almost exclusively publish in international journals hold national journals in some contempt, regarding them as a way of avoiding the effort and hassle of publishing internationally. Moreover, although the way in which governments regard and support the two routes varies between countries, scientists who endure and succeed through the selective route generally receive more prestige and have more influence in shaping national science policies. Conversely, authors who choose the regional route regard their efforts as an important contribution to disseminating information generated by the national scientific community, which might otherwise remain locked away by either language or access policies. Either way, publication is not the end point of a scientific discovery: the results should feed into the pool of knowledge and might inspire other researchers to pursue new avenues or devise new experiments. Hence, to not publish, for any reason, is to break the process of science and potentially inhibit progress.

The choice between regional and international publication also has direct consequences for the research being published. The selective, international route ensures greater visibility, especially if the paper is published in a high-impact journal. The regional route also makes the results and experiments public, but it fails to attract international visibility, particularly if the research is not published in English.

It seems that, for the foreseeable future, this scenario will not change. If it is to change, the revolution must be driven by the national journals themselves. A change that raises the quality and value of national journals would be prudent, because it would give scientists from emerging countries the opportunity to sit on the editorial boards of, or referee for, the resulting high-quality national journals. In this way, the importance of national journals would be enhanced, and scientists from emerging countries would invest effort and gain experience in serving as editors or referees.

The regional route has various weaknesses, however, the most important of which is the peer-review process. Peer review at national journals is simply of a lower standard, owing to several factors: a lack of training in objective research assessment, greater leniency and tolerance of poor-quality science, and an unwillingness by top researchers to participate because they prefer to give their time to the selective journals. This creates an awkward situation: on the one hand, an inability to properly assess submissions; on the other, a lack of motivation to do so.

Notwithstanding these difficulties, most editors and authors of national journals hope that their publications will ultimately be recognized as visible, reliable sources of information, and not only as instruments to communicate national research to the public. In other words, their aspiration is not only to publish good science, albeit of lesser interest to international journals, but also to attain the second or third quartiles of impact factors in their areas. These journals should eventually become good enough to compete with the international ones, mitigating their national character and attracting authors from other countries.

The key is to raise the assessment procedures of national journals to international standards and to professionalize their operations; the two goals are interdependent. The vast majority of national journals are published by societies and research organizations, and their editorial structures are often limited to local researchers. As a result, they are shoestring operations that lack proper administrative support and international input, and they can come across as amateurish. SciELO (the Scientific Electronic Library Online), which indexes national journals and measures their quality, can require certain changes when it indexes a journal, including internationalizing the editorial body or board.

A range of other changes could also be introduced. First, publishers should be given more decision-making authority over how to structure the editorial body. Ad hoc assistants, that is, professional scientists who can lend expertise at the editorial level, should be selected by the editors, who should also assess journal performance. Moreover, publishers should try to attract international scientists with editorial experience to join a core group of two or three chief or senior editors. Their English skills, their experience in their research fields and their influence in the community would catalyse a rapid improvement in the journals and their quality. In other words, experienced international editors should be brought in to strengthen national journals, raise their quality and educate local editors, with the long-term objective of joining the international scientific editing community. This would eventually merge the national and selective routes into a single international route of scientific communication.

Of course, there is a long way to go. Many societies and organizations lack the resources, in money or experience, to attract international scientists as editors. However, new publishing and financial models could provide incentives to attract this kind of expertise. Ultimately, government money alone is neither a reliable nor a sufficient source of income to make national journals successful. One way of enhancing revenue might be to switch to an open-access model that charges author fees, which could be reinvested to improve the journals; in Brazil, for instance, almost all journals have adopted the open-access model (Hedlund et al, 2004), and author fees of around US$1,250, if adopted, would provide financial support for increasing the quality and performance of the journals. Moreover, increased competition between journals at the national level should create a more dynamic situation, raising the general quality of the science they publish. This would also feed back to the scientific community and help to raise the general standards of science in emerging countries.

17.
Simulation-based medicine and the development of complex computer models of biological structures are becoming ubiquitous for advancing biomedical engineering and clinical research. Finite element analysis (FEA) has been widely used in the last few decades to understand and predict biomechanical phenomena. Modeling and simulation approaches in biomechanics are highly interdisciplinary, involving novice and skilled developers in all areas of biomedical engineering and biology. While recent advances in model development and simulation platforms offer a wide range of tools to investigators, the decision-making process during modeling and simulation has become more opaque. Hence, the reliability of models used for medical decision-making and for driving multiscale analysis comes into question. Establishing guidelines for model development and dissemination is a daunting task, particularly with the complex and convoluted models used in FEA. Nonetheless, if better reporting can be established, researchers will have a better understanding of a model's value, and the potential for reusability through sharing will be bolstered. Thus, the goal of this document is to identify resources and a considered set of reporting parameters for FEA studies in biomechanics. These entail various levels of reporting parameters for model identification, model structure, simulation structure, verification, validation, and availability. While we recognize that it may not be possible to provide and detail all of the reporting considerations presented, it is possible to establish a level of confidence with selective use of these parameters. More detailed reporting, however, can establish an explicit outline of the decision-making process in simulation-based analysis, enhancing reproducibility, reusability, and sharing.

18.

Background

Research needs to be reported transparently so readers can critically assess the strengths and weaknesses of the design, conduct, and analysis of studies. Reporting guidelines have been developed to inform reporting for a variety of study designs. The objective of this study was to identify whether there is a need to develop a reporting guideline for survey research.

Methods and Findings

We conducted a three-part project: (1) a systematic review of the literature (including “Instructions to Authors” from the top five journals of 33 medical specialties and top 15 general and internal medicine journals) to identify guidance for reporting survey research; (2) a systematic review of evidence on the quality of reporting of surveys; and (3) a review of reporting of key quality criteria for survey research in 117 recently published reports of self-administered surveys. Fewer than 7% of medical journals (n = 165) provided guidance to authors on survey research despite a majority having published survey-based studies in recent years. We identified four published checklists for conducting or reporting survey research, none of which were validated. We identified eight previous reviews of survey reporting quality, which focused on issues of non-response and accessibility of questionnaires. Our own review of 117 published survey studies revealed that many items were poorly reported: few studies provided the survey or core questions (35%), reported the validity or reliability of the instrument (19%), defined the response rate (25%), discussed the representativeness of the sample (11%), or identified how missing data were handled (11%).

Conclusions

There is limited guidance and no consensus regarding the optimal reporting of survey research. The majority of key reporting criteria are poorly reported in peer-reviewed survey research articles. Our findings highlight the need for clear and consistent reporting guidelines specific to survey research.

20.

Background

The CONSORT Statement provides recommendations for reporting randomized controlled trials. We assessed the extent to which leading medical journals that publish reports of randomized trials incorporate the CONSORT recommendations into their journal and editorial processes.

Methods

This article reports on two observational studies. Study 1: We examined the online version of 'Instructions to Authors' for 165 high-impact-factor medical journals and extracted all text mentioning the CONSORT Statement or CONSORT extension papers. Any mention of the International Committee of Medical Journal Editors (ICMJE) or clinical trial registration was also sought and extracted. Study 2: We surveyed the editor-in-chief, or the editorial office, of each of the 165 journals about the journal's endorsement of the CONSORT recommendations and their incorporation into its editorial and peer-review processes.

Results

Study 1: Thirty-eight percent (62/165) of journals mentioned the CONSORT Statement in their online 'Instructions to Authors'; of these, 37% (23/62) stated that compliance was a requirement, while 63% (39/62) were less clear in their recommendations. Very few journals mentioned the CONSORT extension papers. Journals that referred to CONSORT were more likely to refer to ICMJE guidelines (RR 2.16; 95% CI 1.51 to 3.08) and clinical trial registration (RR 3.67; 95% CI 2.36 to 5.71) than journals that did not. Study 2: Thirty-nine percent (64/165) of journals responded to the online survey; most respondents were journal editors. Eighty-eight percent (50/57) of journals recommended that authors comply with the CONSORT Statement, and 62% (35/56) said they would require this. Forty-one percent (22/53) reported incorporating CONSORT into their peer-review process and 47% (25/53) into their editorial process. Eighty-one percent (47/58) reported including CONSORT in their 'Instructions to Authors', although there was some inconsistency when cross-checking information on the journals' websites. Sixty-nine percent (31/45) of journals recommended that authors comply with the CONSORT extension for cluster trials, 60% (27/45) with the extension for harms, and 42% (19/45) with the extension for non-inferiority and equivalence trials. Few journals mentioned these extensions in their 'Instructions to Authors'.
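For illustration, a relative risk like "RR 2.16; 95% CI 1.51 to 3.08" can be reproduced from a 2x2 table using the standard log-scale (Katz) interval. The counts below are hypothetical, chosen only to roughly match the reported ratio; the study itself reports only the RR and CI.

    import math

    a, n1 = 40, 62    # hypothetical: CONSORT-mentioning journals that also cite ICMJE
    c, n2 = 31, 103   # hypothetical: remaining journals that cite ICMJE
    rr = (a / n1) / (c / n2)
    se_log_rr = math.sqrt(1/a - 1/n1 + 1/c - 1/n2)   # SE of log(RR), Katz method
    lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
    hi = math.exp(math.log(rr) + 1.96 * se_log_rr)
    print(f"RR {rr:.2f}; 95% CI {lo:.2f} to {hi:.2f}")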

Conclusion

Journals should be more explicit in their recommendations and expectations of authors regarding the CONSORT Statement and the related CONSORT extension papers.
