Similar Documents
20 similar documents found.
1.

Background

JAMA introduced a requirement for independent statistical analysis for industry-funded trials in July 2005. We wanted to see whether this policy affected the number of industry-funded trials published by JAMA.

Methods and Findings

We undertook a retrospective, before-and-after study of published papers. Two investigators independently extracted data from all issues of JAMA published between 1 July 2002 and 30 June 2008 (i.e., three years before and three years after the policy). They were not blinded to publication date. The randomized controlled trials (RCTs) were classified as industry funded (IF), joint industry/non-commercial funding (J), industry supported (IS) (when manufacturers provided materials only), non-commercial (N), or funding not stated (NS). Findings were compared and discrepancies were resolved by discussion or further analysis of the reports. RCTs published in The Lancet and NEJM over the same period were used as a control group. Between July 2002 and July 2008, JAMA published 1,314 papers, of which 311 were RCTs. The number of industry studies (IF, J, or IS) fell significantly after the policy (p = 0.02), especially for categories J and IS. However, over the same period, the number of industry studies rose in both The Lancet and NEJM.

Conclusions

After the requirement for independent statistical analysis for industry-funded studies, JAMA published significantly fewer RCTs overall and significantly fewer industry-funded RCTs. This pattern was not seen in the control journals, which suggests the JAMA policy affected the number of submissions, the acceptance rate, or both. Without analysing the submissions themselves, we cannot test these hypotheses, but, assuming the number of published papers is related to the number submitted, our findings suggest that JAMA's policy may have resulted in a significant reduction in the number of industry-sponsored trials it received and published.

2.

Background

Transparency in reporting of conflict of interest is an increasingly important aspect of publication in medical journals. Publication of large industry-supported trials may generate many citations and journal income through reprint sales and thereby be a source of conflicts of interest for journals. We investigated industry-supported trials' influence on journal impact factors and revenue.

Methods and Findings

We sampled six major medical journals (Annals of Internal Medicine, Archives of Internal Medicine, BMJ, JAMA, The Lancet, and New England Journal of Medicine [NEJM]). For each journal, we identified randomised trials published in 1996–1997 and 2005–2006 using PubMed, and categorized the type of financial support. Using Web of Science, we investigated citations of industry-supported trials and the influence on journal impact factors over a ten-year period. We contacted journal editors and retrieved tax information on income from industry sources. The proportion of trials with sole industry support varied between journals, from 7% in BMJ to 32% in NEJM in 2005–2006. Industry-supported trials were more frequently cited than trials with other types of support, and omitting them from the impact factor calculation decreased journal impact factors. The decrease varied considerably between journals, ranging from 1% for BMJ to 15% for NEJM in 2007. For the two journals disclosing data, income from the sales of reprints contributed 3% and 41% of the total income for BMJ and The Lancet, respectively, in 2005–2006.

Conclusions

Publication of industry-supported trials was associated with an increase in journal impact factors. Sales of reprints may provide a substantial income. We suggest that journals disclose financial information in the same way that they require it of their authors, so that readers can assess the potential effect of different types of papers on journals' revenue and impact. Please see later in the article for the Editors' Summary.
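The impact-factor recalculation this abstract describes (recomputing a journal's impact factor after omitting industry-supported trials and the citations they draw) can be sketched in a few lines. The figures below are illustrative placeholders, not data from the study:

```python
def impact_factor(citations, citable_items):
    """Classic two-year impact factor: citations received in year Y by
    items published in years Y-1 and Y-2, divided by the number of
    citable items published in those two years."""
    return citations / citable_items

# Hypothetical journal: 50,000 citations to 1,000 citable items, of
# which 100 are industry-supported trials that drew 7,500 citations.
if_all = impact_factor(50_000, 1_000)
if_without_industry = impact_factor(50_000 - 7_500, 1_000 - 100)
decrease = 100 * (if_all - if_without_industry) / if_all
print(f"{if_all:.1f} -> {if_without_industry:.1f} ({decrease:.1f}% decrease)")
```

Because heavily cited trials raise the numerator faster than they raise the denominator, removing them lowers the quotient, which is the effect the study measured (a decrease of 1% for BMJ to 15% for NEJM).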

3.

Background

The placement of medical research news on a newspaper's front page is intended to gain the public's attention, so it is important to understand the source of the news in terms of research maturity and evidence level.

Methodology/Principal Findings

We searched LexisNexis to identify medical research reported on front pages of major newspapers published from January 1, 2000 to December 31, 2002. We used MEDLINE and Google Scholar to find journal articles corresponding to the research, and determined their evidence level.

Of 734 front-page medical research stories identified, 417 (57%) referred to mature research published in peer-reviewed journals. The remaining 317 stories referred to preliminary findings presented at scientific or press meetings; 144 (45%) of those stories mentioned studies that later matured (i.e., were published in journals within 3 years after news coverage). The evidence-level distribution of the 515 journal articles quoted in news stories reporting on mature research (3% level I, 21% level II, 42% level III, 4% level IV, and 31% level V) differed from that of the 170 reports of preliminary research that later matured (1%, 19%, 35%, 12%, and 33%, respectively; chi-square test, P = .0009). No news stories indicated evidence level. Fewer than 1 in 5 news stories reporting preliminary findings acknowledged the preliminary nature of their content.

Conclusions/Significance

Only 57% of front-page stories reporting on medical research are based on mature research, which tends to have a higher evidence level than research with preliminary findings. Medical research news should be clearly referenced and state the evidence level and limitations to inform the public of the maturity and quality of the source.
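The comparison of evidence-level distributions above rests on a chi-square test of homogeneity. A minimal pure-Python sketch of the test statistic, with illustrative counts chosen only to approximate the percentages quoted in the abstract (not the paper's raw data):

```python
def chi_square_stat(table):
    """Pearson chi-square statistic for an r x c contingency table,
    here a test of homogeneity of two evidence-level distributions."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand_total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand_total
            stat += (observed - expected) ** 2 / expected
    return stat

# Illustrative counts for evidence levels I-V (rows: mature research,
# preliminary research that later matured); df = (2-1) * (5-1) = 4.
mature = [15, 108, 216, 21, 155]   # n = 515
preliminary = [2, 32, 60, 20, 56]  # n = 170
print(round(chi_square_stat([mature, preliminary]), 2))
```

The statistic is then compared against the chi-square distribution with 4 degrees of freedom to obtain the p-value (the abstract reports P = .0009).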

4.

Background

Clear, transparent, and sufficiently detailed abstracts of randomized controlled trials (RCTs) published in journal articles are important because readers will often base their initial assessment of a trial on such information. However, little is known about the quality of reporting in abstracts of RCTs published in medical journals in China.

Methods

We identified RCT abstracts from five leading Chinese medical journals published between 1998 and 2007 and indexed in MEDLINE. We assessed the quality of reporting of these abstracts against the Consolidated Standards of Reporting Trials (CONSORT) abstract checklist. We also sought to identify whether any differences exist in reporting between the Chinese- and English-language versions of the same abstract.

Results

We identified 332 RCT abstracts eligible for examination. Overall, the abstracts we examined reported 0–8 of the items designated in the CONSORT checklist, with an average of three items reported per abstract. Details of the interventions (288/332; 87%), the number of participants randomized (216/332; 65%), and the study objectives (109/332; 33%) were the three most frequently reported items. Only two RCT abstracts reported details of trial registration, no abstract reported the method of allocation concealment, and only one mentioned specifically who was blinded. In terms of the proportion of RCT abstracts fulfilling a criterion, the absolute difference between the Chinese and English abstracts averaged 10 percentage points per item (range 0 to 25).

Conclusions

The quality of reporting in abstracts of RCTs published in Chinese medical journals needs to be improved. We hope that the introduction and endorsement of the CONSORT for Abstracts guidelines by journals reporting RCTs will lead to improvements in the quality of reporting.

5.

Background

Influential medical journals shape medical science and practice and their prestige is usually appraised by citation impact metrics, such as the journal impact factor. However, how permanent are medical journals and how stable is their impact over time?

Methods and Results

We evaluated what happened to general medical journals that were publishing papers half a century ago, in 1959. Data were retrieved from ISI Web of Science for citations and PubMed (Journals function) for journal history. Of 27 eligible journals publishing in 1959, 4 have stopped circulation (including two of the most prestigious journals in 1959) and another 7 changed name between 1959 and 2009. Only 6 of these 27 journals have been published continuously with their initial name since they started circulation. The citation impact of papers published in 1959 gives a very different picture from the current journal impact factor; the correlation between the two is non-significant and very close to zero. Only 13 of the 5,223 papers published in 1959 received at least 5 citations in 2009.

Conclusions

Journals are more permanent entities than single papers, but they are also subject to major change and their relative prominence can change markedly over time.

6.
7.

Background

The reporting of outcomes within published randomized trials has previously been shown to be incomplete, biased and inconsistent with study protocols. We sought to determine whether outcome reporting bias would be present in a cohort of government-funded trials subjected to rigorous peer review.

Methods

We compared protocols for randomized trials approved for funding by the Canadian Institutes of Health Research (formerly the Medical Research Council of Canada) from 1990 to 1998 with subsequent reports of the trials identified in journal publications. Characteristics of reported and unreported outcomes were recorded from the protocols and publications. Incompletely reported outcomes were defined as those with insufficient data provided in publications for inclusion in meta-analyses. An overall odds ratio measuring the association between completeness of reporting and statistical significance was calculated stratified by trial. Finally, primary outcomes specified in trial protocols were compared with those reported in publications.

Results

We identified 48 trials with 68 publications and 1402 outcomes. The median number of participants per trial was 299, and 44% of the trials were published in general medical journals. A median of 31% (10th–90th percentile range 5%–67%) of outcomes measured to assess the efficacy of an intervention (efficacy outcomes) and 59% (0%–100%) of those measured to assess the harm of an intervention (harm outcomes) per trial were incompletely reported. Statistically significant efficacy outcomes had higher odds of being fully reported than nonsignificant efficacy outcomes (odds ratio 2.7; 95% confidence interval 1.5–5.0). Primary outcomes differed between protocols and publications for 40% of the trials.

Interpretation

Selective reporting of outcomes frequently occurs in publications of high-quality government-funded trials.

Selective reporting of results from randomized trials can occur either at the level of end points within published studies (outcome reporting bias) [1] or at the level of entire trials that are selectively published (study publication bias) [2]. Outcome reporting bias has previously been demonstrated in a broad cohort of published trials approved by a regional ethics committee [1]. The Canadian Institutes of Health Research (CIHR) — the primary federal funding agency, known before 2000 as the Medical Research Council of Canada (MRC) — recognized the need to address this issue and conducted an internal review process in 2002 to evaluate the reporting of results from its funded trials. The primary objectives were to determine (a) the prevalence of incomplete outcome reporting in journal publications of randomized trials; (b) the degree of association between adequate outcome reporting and statistical significance; and (c) the consistency between primary outcomes specified in trial protocols and those specified in subsequent journal publications.
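The "overall odds ratio ... stratified by trial" described in this study's Methods is the kind of quantity a Mantel-Haenszel pooled estimator produces. A minimal sketch under that assumption, with hypothetical per-trial counts (the abstract does not give the raw 2x2 tables):

```python
def mantel_haenszel_or(strata):
    """Mantel-Haenszel pooled odds ratio across strata (here, trials).
    Each stratum is a 2x2 table (a, b, c, d):
      a = significant outcomes fully reported
      b = significant outcomes incompletely reported
      c = nonsignificant outcomes fully reported
      d = nonsignificant outcomes incompletely reported"""
    numerator = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    denominator = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return numerator / denominator

# Two hypothetical trials (illustrative counts only):
strata = [(12, 4, 6, 8), (9, 3, 5, 10)]
print(round(mantel_haenszel_or(strata), 2))  # 4.82
```

Stratifying by trial keeps the comparison within each trial, so between-trial differences in reporting practice do not confound the pooled estimate.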

8.
Author self-citation in the general medicine literature
Kulkarni AV, Aziz B, Shams I, Busse JW. PLoS ONE 2011;6(6):e20885.

Background

Author self-citation contributes to the overall citation count of an article and the impact factor of the journal in which it appears. Little is known, however, about the extent of self-citation in the general clinical medicine literature. The objective of this study was to determine the extent and temporal pattern of author self-citation and the article characteristics associated with author self-citation.

Methodology/Principal Findings

We performed a retrospective cohort study of articles published in three high-impact general medical journals (JAMA, The Lancet, and New England Journal of Medicine) between October 1, 1999 and March 31, 2000. We retrieved the number and percentage of author self-citations received by each article since publication, as of June 2008, from the Scopus citation database. Several article characteristics were extracted by two blinded, independent reviewers for each article in the cohort and analyzed in multivariable linear regression analyses. Since publication, author self-citations accounted for 6.5% (95% confidence interval 6.3–6.7%) of all citations received by the 328 articles in our sample. Self-citation peaked in 2002 and declined annually thereafter. Studies with more authors, with smaller sample sizes, and in cardiovascular medicine or infectious disease were associated with more author self-citations and a higher percentage of author self-citation (all p≤0.01).

Conclusions/Significance

Approximately 1 in 15 citations of articles in high-profile general medicine journals are author self-citations. Self-citation peaks within about 2 years of publication and disproportionately affects impact factor. The studies most vulnerable to this effect are those with more authors, those with small sample sizes, and those in cardiovascular medicine or infectious disease.
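The self-citation proportion and 95% confidence interval quoted above (6.5%, 6.3–6.7%) have the shape of a normal-approximation (Wald) interval. A sketch with hypothetical citation totals (the abstract does not report the raw counts):

```python
import math

def wald_ci(successes, n, z=1.96):
    """Normal-approximation (Wald) 95% CI for a proportion, e.g. the
    share of all citations that are author self-citations."""
    p = successes / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return p, p - half_width, p + half_width

# Hypothetical totals consistent with a 6.5% self-citation share:
p, low, high = wald_ci(3_900, 60_000)
print(f"{p:.1%} (95% CI {low:.1%}-{high:.1%})")  # 6.5% (95% CI 6.3%-6.7%)
```

With tens of thousands of citations the normal approximation is reasonable; for small samples a Wilson or exact interval would be preferred.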

9.
10.

Background

The widespread reluctance to share published research data is often hypothesized to be due to the authors' fear that reanalysis may expose errors in their work or may produce conclusions that contradict their own. However, these hypotheses have not previously been studied systematically.

Methods and Findings

We related the reluctance to share research data for reanalysis to 1148 statistically significant results reported in 49 papers published in two major psychology journals. We found the reluctance to share data to be associated with weaker evidence (against the null hypothesis of no effect) and a higher prevalence of apparent errors in the reporting of statistical results. The unwillingness to share data was particularly clear when reporting errors had a bearing on statistical significance.

Conclusions

Our findings, based on psychology papers, suggest that statistical results are particularly hard to verify when reanalysis is more likely to lead to contrasting conclusions. This highlights the importance of establishing mandatory data-archiving policies.

11.

Background

Citation analysis has become an important tool for research performance assessment in the medical sciences. However, different areas of medical research may have considerably different citation practices, even within the same medical field. Because of this, it is unclear to what extent citation-based bibliometric indicators allow for valid comparisons between research units active in different areas of medical research.

Methodology

A visualization methodology is introduced that reveals differences in citation practices between medical research areas. The methodology extracts terms from the titles and abstracts of a large collection of publications and uses these terms to visualize the structure of a medical field and to indicate how research areas within this field differ from each other in their average citation impact.

Results

Visualizations are provided for 32 medical fields, defined based on journal subject categories in the Web of Science database. The analysis focuses on three fields: Cardiac & cardiovascular systems, Clinical neurology, and Surgery. In each of these fields, there turn out to be large differences in citation practices between research areas. Low-impact research areas tend to focus on clinical intervention research, while high-impact research areas are often oriented more toward basic and diagnostic research.

Conclusions

Popular bibliometric indicators, such as the h-index and the impact factor, do not correct for differences in citation practices between medical fields. These indicators therefore cannot be used to make accurate between-field comparisons. More sophisticated bibliometric indicators do correct for field differences but still fail to take into account within-field heterogeneity in citation practices. As a consequence, the citation impact of clinical intervention research may be substantially underestimated in comparison with basic and diagnostic research.

12.

Background

Systematic reviews (SRs) and meta-analyses (MAs) provide the highest possible level of evidence. However, poor conduct or reporting of SRs and MAs may reduce their utility. The PRISMA Statement (Preferred Reporting Items for Systematic reviews and Meta-Analyses) was developed to help authors report their SRs and MAs adequately.

Objectives

Our objectives were to (1) evaluate the quality of reporting of SRs and MAs and their abstracts in otorhinolaryngologic literature using the PRISMA and PRISMA for Abstracts checklists, respectively, (2) compare the quality of reporting of SRs and MAs published in Ear Nose Throat (ENT) journals to the quality of SRs and MAs published in the ‘gold standard’ Cochrane Database of Systematic Reviews (CDSR), and (3) formulate recommendations to improve reporting of SRs and MAs in ENT journals.

Methods

On September 3, 2014, we searched the PubMed database using a combination of filters to retrieve SRs and MAs on otorhinolaryngologic topics published in 2012 and 2013 in the top 5 ENT journals (ISI Web of Knowledge 2013) or the CDSR, and selected the relevant articles. We assessed how many, and which, PRISMA (for Abstracts) items were reported adequately per journal type.

Results

We identified large differences in the reporting of individual items between the two journal types, with room for improvement. In general, SRs and MAs published in ENT journals (n = 31) reported a median of 54.4% of the PRISMA items adequately, whereas the 49 articles published in the CDSR reported a median of 100.0% adequately (a statistically significant difference, p < 0.001). For abstracts, the medians were 41.7% for ENT journals and 75.0% for the CDSR (p < 0.001).

Conclusion

The reporting of SRs and MAs in ENT journals leaves room for improvement and would benefit if the PRISMA Statement were endorsed by these journals.

13.

Background

We investigated whether there had been an improvement in quality of reporting for randomised controlled trials of acupuncture since the publication of the STRICTA and CONSORT statements. We conducted a before-and-after study, comparing ratings for quality of reporting following the publication of both STRICTA and CONSORT recommendations.

Methodology and Principal Findings

Ninety peer reviewed journal articles reporting the results of acupuncture trials were selected at random from a wider sample frame of 266 papers. Papers published in three distinct time periods (1994–1995, 1999–2000 and 2004–2005) were compared. Assessment criteria were developed directly from CONSORT and STRICTA checklists. Papers were independently assessed for quality of reporting by two assessors, one of whom was blind to information which could have introduced systematic bias (e.g. date of publication). We detected a statistically significant increase in the reporting of CONSORT items for papers published in each time period measured. We did not, however, find a difference between the number of STRICTA items reported in journal articles published before and 3 to 4 years following the introduction of STRICTA recommendations.

Conclusions and Significance

The results of this study suggest that general standards of reporting for acupuncture trials have significantly improved since the introduction of the CONSORT statement in 1996, but that the quality of reporting of details specific to acupuncture interventions has yet to change following the more recent introduction of the STRICTA recommendations. Wider targeting and revision of the guidelines are recommended.

14.

Context

Problem-solving in terms of clinical reasoning is regarded as a key competence of medical doctors. Little is known about the general cognitive actions underlying the strategies of problem-solving among medical students. In this study, a theory-based model was used and adapted in order to investigate the cognitive actions in which medical students are engaged when dealing with a case and how patterns of these actions are related to the correct solution.

Methods

Twenty-three medical students worked on three clinical nephrology cases using the think-aloud method. The transcribed recordings were coded using a theory-based model consisting of eight different cognitive actions. The coded data were analysed as time sequences in a graphical representation software. Furthermore, the relationship between the coded data and the accuracy of diagnosis was investigated with inferential statistical methods.

Results

The observation of all main actions in a case elaboration, including evaluation, representation, and integration, was considered a complete model and was found in the majority of cases (56%). This pattern was significantly related to the accuracy of the case solution (φ = 0.55; p<.001). The extent of prior knowledge was related neither to the complete model nor to the correct solution.

Conclusions

The proposed model is suitable for empirically verifying the cognitive actions of problem-solving among medical students. The cognitive actions evaluation, representation, and integration are crucial for the complete model and therefore for the accuracy of the solution. The educational implication which may be drawn from this study is to foster students' reasoning by focusing on higher-level reasoning.

15.
16.

Background

Epigenome-wide association studies of human disease and other quantitative traits are becoming increasingly common. A series of papers reporting age-related changes in DNA methylation profiles in peripheral blood have already been published. However, blood is a heterogeneous collection of different cell types, each with a very different DNA methylation profile.

Results

Using a statistical method that permits estimating the relative proportion of cell types from DNA methylation profiles, we examine data from five previously published studies, and find strong evidence of cell composition change across age in blood. We also demonstrate that, in these studies, cellular composition explains much of the observed variability in DNA methylation. Furthermore, we find high levels of confounding between age-related variability and cellular composition at the CpG level.

Conclusions

Our findings underscore the importance of considering cell composition variability in epigenetic studies based on whole blood and other heterogeneous tissue sources. We also provide software for estimating and exploring this composition confounding for the Illumina 450k microarray.
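The deconvolution idea underlying this study (estimating cell-type proportions from a mixed methylation profile using reference profiles) can be illustrated in the simplest two-cell-type case, where the mixture weight has a closed-form least-squares solution. This is a toy sketch of the general approach, not the authors' method or the 450k software they provide:

```python
def mixture_weight(mixed, ref_a, ref_b):
    """Least-squares estimate of w in: mixed ≈ w*ref_a + (1-w)*ref_b,
    fit across CpG sites; clipped to the valid proportion range [0, 1]."""
    num = sum((m - b) * (a - b) for m, a, b in zip(mixed, ref_a, ref_b))
    den = sum((a - b) ** 2 for a, b in zip(ref_a, ref_b))
    return min(max(num / den, 0.0), 1.0)

# Toy reference methylation (beta values) at 4 CpGs for two cell types,
# and a blood sample that is a 70% type A / 30% type B mixture:
ref_a = [0.9, 0.1, 0.8, 0.2]
ref_b = [0.1, 0.9, 0.3, 0.7]
mixed = [0.7 * a + 0.3 * b for a, b in zip(ref_a, ref_b)]
print(round(mixture_weight(mixed, ref_a, ref_b), 2))  # 0.7
```

Real deconvolution methods generalize this to many cell types via constrained regression against reference profiles for informative CpGs, which is what makes age-related composition shifts detectable from whole-blood data.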

17.

Background

Disaster is a serious public health issue. Health professionals and community residents are the main players in disaster responses, but their levels of knowledge of disaster medicine are not readily available. This study aimed to evaluate the knowledge levels and training needs of disaster medicine among potential disaster responders and to demonstrate the need to popularize disaster medicine education.

Methods

A self-reporting questionnaire survey on the knowledge level and training needs of disaster medicine was conducted in Shanghai, China, in 2012. A total of 547 randomly selected health professionals, 456 medical students, and 1,526 local residents provided complete information. The total response rate was 93.7%.

Results

Overall, 1.3% of the participants had received systematic disaster medicine training. News media (87.1%) was the most common channel for acquiring disaster medicine knowledge. Although health professionals were more knowledgeable than community residents, their knowledge of disaster medicine was incomplete. Medical teachers were more knowledgeable than medical practitioners and health administrators (p = 0.002). Clinicians performed better than public health physicians (p<0.001), whereas public health students performed better than clinical medical students (p<0.001). Among community residents, educational background significantly affected the level of knowledge of disaster medicine (p<0.001). Training needs for disaster medicine were generally high among those surveyed. 'Lecture' and 'practical training' were the preferred teaching methods. The key and interesting contents selected for disaster medicine training were similar between health professionals and medical students, while the priorities chosen by local residents differed considerably from those of health professionals and medical students (p<0.001).

Conclusions

Traditional clinically oriented medical education may leave a large gap between knowledge of disaster medicine and the current needs of disaster preparedness. Continuing medical education and public education plans on disaster medicine delivered via the media should be practice-oriented, selectively applied to different populations, and should take knowledge levels and training needs into consideration.

18.

Background

Research needs to be reported transparently so readers can critically assess the strengths and weaknesses of the design, conduct, and analysis of studies. Reporting guidelines have been developed to inform reporting for a variety of study designs. The objective of this study was to identify whether there is a need to develop a reporting guideline for survey research.

Methods and Findings

We conducted a three-part project: (1) a systematic review of the literature (including “Instructions to Authors” from the top five journals of 33 medical specialties and top 15 general and internal medicine journals) to identify guidance for reporting survey research; (2) a systematic review of evidence on the quality of reporting of surveys; and (3) a review of reporting of key quality criteria for survey research in 117 recently published reports of self-administered surveys. Fewer than 7% of medical journals (n = 165) provided guidance to authors on survey research despite a majority having published survey-based studies in recent years. We identified four published checklists for conducting or reporting survey research, none of which were validated. We identified eight previous reviews of survey reporting quality, which focused on issues of non-response and accessibility of questionnaires. Our own review of 117 published survey studies revealed that many items were poorly reported: few studies provided the survey or core questions (35%), reported the validity or reliability of the instrument (19%), defined the response rate (25%), discussed the representativeness of the sample (11%), or identified how missing data were handled (11%).

Conclusions

There is limited guidance and no consensus regarding the optimal reporting of survey research. The majority of key reporting criteria are poorly reported in peer-reviewed survey research articles. Our findings highlight the need for clear and consistent reporting guidelines specific to survey research. Please see later in the article for the Editors' Summary.

19.

Background

The WHO estimates that 13% of maternal mortality is due to unsafe abortion, but challenges with measurement and data quality persist. To our knowledge, no systematic assessment of the validity of studies reporting estimates of abortion-related mortality exists.

Study Design

To be included in this study, articles had to meet the following criteria: (1) published between September 1, 2000 and December 1, 2011; (2) utilized data from a country where abortion is "considered unsafe"; (3) specified and enumerated causes of maternal death, including "abortion"; (4) enumerated ≥100 maternal deaths; (5) was a quantitative research study; and (6) was published in a peer-reviewed journal.

Results

We initially identified 7,438 articles; 36 studies were ultimately included. Overall, studies rated "Very Good" found the highest estimates of abortion-related mortality (median 16%, range 1–27.4%), whereas studies rated "Very Poor" found the lowest overall proportion of abortion-related deaths (median 2%, range 1.3–9.4%).

Conclusions

Improvements in the quality of data collection would facilitate a better understanding of global abortion-related mortality. Until improved data exist, better reporting of study procedures and standardization of the definitions of abortion and abortion-related mortality should be encouraged.

20.