Similar Documents
20 similar documents found (search time: 15 ms)
1.
Studies published in the medical literature often neglect to consider the statistical power needed to detect a meaningful difference between study groups. Small sample sizes tend to produce negative results because of low statistical power. Studies that cannot make conclusive statements about their hypotheses can waste resources, deter further research, and impede advances in clinical treatment. The current study reviewed three of the most frequently read plastic surgery journals from 1976 to 1996 to determine the prevalence of inadequately (<80 percent) powered clinical trials and experimental studies that found no difference (negative studies) in the response variable of interest between comparison groups. The statistical power of 54 negative studies using continuous response variables was calculated to detect a difference of 1 SD (+/-1 SD) in means between the comparative groups. The power of another 57 negative studies with dichotomous response (yes/no) variables was calculated to detect a relative change in proportions of 25 percent and 50 percent from the experimental to the control group. It was found that 85 percent of the studies with continuous response variables had inadequate power to detect the desired mean difference of +/-1 SD. In studies with dichotomous response variables, 98 percent had inadequate power to detect a desired 25 percent relative change in proportions, and 74 percent had inadequate power to detect a desired 50 percent relative change in proportions. These results indicate that many of the studies in the plastic surgery literature lack adequate power to detect a moderate-to-large difference between groups. The lack of power makes the interpretation of the studies with negative findings inconclusive. 
Proper study design dictates that investigators consider a priori the difference between groups that is of clinical interest, and the sample size per group that is needed to provide adequate statistical power to detect the desired difference.
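The 80-percent power threshold the authors apply can be reproduced with a quick normal-approximation sketch. This is an illustrative assumption on my part — the paper does not state its exact formula, and a noncentral-t calculation would be slightly more accurate for small groups:

```python
from scipy.stats import norm

def power_two_sample(d, n_per_group, alpha=0.05):
    # Normal-approximation power of a two-sided two-sample test for a
    # standardized mean difference d with n_per_group subjects per arm.
    z_crit = norm.ppf(1 - alpha / 2)
    ncp = d * (n_per_group / 2) ** 0.5  # approximate noncentrality
    return norm.cdf(ncp - z_crit) + norm.cdf(-ncp - z_crit)
```

For the 1 SD difference considered in the review (d = 1), roughly 17 subjects per group already reach 80 percent power, which underlines how small the reviewed studies must have been to fall short.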

2.
Meneghini R. EMBO Reports 2012, 13(2):106–108
Emerging countries have established national scientific journals as an alternative publication route for their researchers. However, these journals eventually need to catch up to international standards.

Since the first scientific journal—The Philosophical Transactions of the Royal Society—was founded in 1665, the number of journals dedicated to publishing academic research has exploded. The Thomson Reuters Web of Knowledge database alone—which represents far less than the total number of academic journals—includes more than 11,000 journals from non-profit, society and commercial publishers, published in numerous languages and with content ranging from the natural sciences to the social sciences and humanities. Notwithstanding the sheer scale and diversity of academic publishing, however, there is a difference between the publishing enterprises of developed and emerging countries in terms of the commercial rationale behind the journals. Although all academic journals seek to serve their readership by publishing the highest-quality and most interesting advances, a growing trend in the twentieth century has seen publishers in developed countries treating academic publishing as a way of generating profit: the desire of journal editors to publish the best and most interesting science thereby serves the commercial interest of publishers who want people to buy the publication.

In emerging countries, however, there are few commercial reasons to publish a journal.
Instead, ‘national' or even ‘local' journals are published and supported because they report important, practical information that would be declined by international journals, either because the topic is of only local or marginal interest, or because the research does not meet the high standards for publication at an international level. Consequently, most ‘national' journals are not able to finance themselves and depend on public funding. In Brazil, for instance, national journals account for one-third of all scientific articles published from Brazil and are mostly funded by the government. Other emerging countries that invest in research—notably China, India and Russia—also have a sizable number of national journals, most of which are published in their native language.

There is little competition between developed countries to publish the most or the best scientific journals. There is clear competition between the top-flight journals—Nature and Science, for example—but this competition is academic and/or commercial, rather than national. In fact, countries of similar scientific calibre in terms of the research they generate differ greatly in the number of journals published within their borders. According to the Thomson Reuters database, for example, the Netherlands, Switzerland and Sweden published 847, 202 and 30 scientific journals, respectively, in 2010—the Netherlands has been a traditional haven for publishers.
However, the number of articles published by researchers in these countries in journals indexed by Thomson Reuters—a rough measure of scientific productivity—does not differ significantly.

Scientists who edit directly or serve on the editorial boards of high-quality, international journals have a major responsibility because they guide the direction and set the standards of scientific research. In deciding what to publish, they define the quality of research, promote emerging research areas and set the criteria by which research is judged to be new and exciting; they are the gatekeepers of science. The distribution of these scientists also reflects the division between developed and emerging countries in scientific publishing. Using the Netherlands, Switzerland and Sweden as examples, they respectively contributed 235, 256 and 160 scientists to the editorial teams or boards of 220 high-impact, selected journals in 2005 (Braun & Diospatonyi, 2005). These numbers are comparable with the scientific production of these countries in terms of publications. On the other hand, Brazil, South Korea and Russia—countries as scientifically productive in terms of total number of articles as the Netherlands, Switzerland and Sweden—contributed only 28, 29 and 55 ‘gatekeepers', respectively. A principal reason for this difference is, of course, the more variable quality of the science produced in emerging countries, but it is nevertheless clear that their scientists are under-represented on the teams that define the course and standards of scientific research.

To overcome the perceived dominance of international journals, and to address the significant barriers to getting published that their scientists face, some emerging countries have increased the number of national journals (Sumathipala et al, 2004).
Such barriers have been well documented and include poor written English and the generally lower or more variable quality of the science produced in emerging countries. However, although English, the lingua franca of modern science (Meneghini & Packer, 2007), is not as great a barrier as some would claim, there is some evidence of a conscious or subconscious bias among reviewers and editors in judging articles from emerging countries (Meneghini et al, 2008; Sumathipala et al, 2004).

A third pressure has also forced some emerging countries to introduce more national journals in which to publish academic research from within their borders: greater scientific output. During the past two or three decades, several of these countries—notably China, India and Brazil, among others—have made huge investments in research, which has enormously increased their scientific productivity. Initially, the new national journals aspired to adopt the rigid rules of peer review and the quality standards of international journals, but this approach did not produce satisfactory results in terms of the quality of papers published. On the one hand, it is hard for national journals to secure the expertise of scientists competent to review their submissions; on the other, the reviewers who do agree tend to be more lenient, ostensibly believing that peer review as rigorous as that of international journals would run counter to the purpose of making scientific results publicly available, at least at the national level.

The establishment of national journals has, in effect, created two parallel communication streams for scientists in emerging countries: publication in international journals—the selective route—and publication in national journals—the regional route. On the basis of their perceived chances of being accepted by an international journal, authors can choose the route that gives them the best opportunity to make their results public.
Economic conditions are also important, as the resources to produce national journals come from government, so national journals can face budget cuts in times of austerity. In the worst case, this can lead to the demise of national journals, to the disadvantage of authors who have built their careers by publishing in them.

There is some anecdotal evidence that authors who often or almost exclusively publish in international journals hold national journals in some contempt—they regard them as a way of avoiding the effort and hassle of publishing internationally. Moreover, although the way in which governments regard and support the divergent routes varies between countries, in general, scientists who endure and succeed through the selective route often receive more prestige and have more influence in shaping national science policies. Conversely, authors who choose the regional publication route regard their efforts as an important contribution to the dissemination of information generated by the national scientific community, which might otherwise remain locked away—by either language or access policies. Either way, it is worth mentioning that publication is obviously not the end point of a scientific discovery: the results should feed into the pool of knowledge and might inspire other researchers to pursue new avenues or devise new experiments. Hence, to not publish, for any reason, is to break the process of science and potentially inhibit progress.

The choice of pursuing publication in regional or international journals also has direct consequences for the research being published. The selective, international route ensures greater visibility, especially if the paper is published in a high-impact journal.
The regional route also makes the results and experiments public, but it fails to attract international visibility, particularly if the research is not published in English.

It seems that, for the foreseeable future, this scenario will not change. If it is to change, however, then the revolution must be driven by the national journals. In fact, a change that raises the quality and value of national journals would be prudent because it would give scientists from emerging countries the opportunity to sit on the editorial boards of, or referee for, the resulting high-quality national journals. In this way, the importance of national journals would be enhanced and scientists from emerging countries would invest effort and gain experience in serving as editors or referees.

The regional route has various weaknesses, however, the most important of which is the peer-review process. Peer review at national journals is simply of a lower standard owing to several factors: a lack of training in objective research assessment, greater leniency and tolerance of poor-quality science, and an unwillingness by top researchers to participate because they prefer to give their time to the selective journals. This creates an awkward situation: on the one hand, an inability to properly assess submissions; on the other, a lack of motivation to do so.

Notwithstanding these difficulties, most editors and authors of national journals hope that their publications will ultimately be recognized as visible, reliable sources of information, and not only as instruments to communicate national research to the public. In other words, their aspiration is not only to publish good science—albeit of lesser interest to international journals—but also to attain the second or third quartiles of impact factors in their areas.
These journals should eventually be good enough to compete with the international ones, mitigating their national character and attracting authors from other countries.

The key is to raise the assessment procedures at national journals to international standards, and to professionalize their operations. The two goals are interdependent. The vast majority of national journals are published by societies and research organizations, and their editorial structures are often limited to local researchers. As a result, they are shoestring operations that lack proper administrative support and international input, and can come across as amateurish. SciELO (the Scientific Electronic Library Online), which indexes national journals and measures their quality, can require certain changes when it indexes a journal, including the requirement to internationalize the editorial body or board.

In terms of improving this status quo, a range of other changes could be introduced. First, more decision-making authority should be given to publishers to decide how to structure the editorial body. Ad hoc assistants—professional scientists who can lend expertise at the editorial level—should be selected by the editors, who should also assess journal performance. Moreover, publishers should try to attract international scientists with editorial experience to join a core group of two or three chief or senior editors. Their English skills, their experience in their research fields and their influence in the community would catalyse a rapid improvement of the journals and their quality. In other words, experienced international editors should be brought in to strengthen national journals, raise their quality and educate local editors, with the long-term objective of joining the international scientific editing community.
This would eventually merge the national and the selective routes of publishing into a single international route of scientific communication.

Of course, there is a long way to go. The problem is that many societies and organizations do not have sufficient resources—money or experience—to attract international scientists as editors. However, new publishing and financial models could provide incentives to attract this kind of expertise. Ultimately, government money alone is neither a reliable nor a sufficient source of income to make national journals successful. One way of enhancing revenue streams might be to switch to an open-access model that charges author fees, which could be reinvested to improve the journals. In Brazil, for instance, almost all journals have adopted the open-access model (Hedlund et al, 2004). Author fees—around US$1,250—if adopted, would provide financial support for increasing the quality and performance of the journals. Moreover, increased competition between journals at the national level should create a more dynamic situation, raising the general quality of the science they publish. This would also feed back to the scientific community and help to raise the general standards of science in emerging countries.

3.
The publishing of research has implications for the evaluation of research careers, research departments, and funding for research projects. Researchers' academic evaluation relies heavily on the status of the journals in which they publish. The inclusion of one's work in the Science Citation Index (SCI) and the Social Science Citation Index (SSCI) is often used as an indicator of academic quality. This is unfortunate for many environmental researchers, as their journals are not represented in the SCI and SSCI. Two investigations were carried out to determine the reasons for this. The first identified 352 existing environmental academic journals, classified into seven categories (and several subcategories). Of these, two categories were not represented in the SCI or SSCI: environmental systems analysis journals and corporate environmental management journals. The second investigated the publishing patterns of interdisciplinary research groups and the characteristics of the journals in which they publish. In spite of acceptable citation levels, interdisciplinary environmental journals are excluded from the SCI and SSCI. A major reason seems to be that citations of their articles are not counted by the Institute for Scientific Information (ISI), the organization producing the SCI and SSCI, because the citations mostly take place in a group of journals completely unrepresented in ISI's database.

4.
5.
Both significant positive and negative relationships between the magnitude of research findings (their 'effect size') and their year of publication have been reported in a few areas of biology. These trends have been attributed to Kuhnian paradigm shifts, scientific fads and bias in the choice of study systems. Here we test whether or not these isolated cases reflect a more general trend. We examined the relationship using effect sizes extracted from 44 peer-reviewed meta-analyses covering a wide range of topics in ecological and evolutionary biology. On average, there was a small but significant decline in effect size with year of publication. For the original empirical studies there was also a significant decrease in effect size as sample size increased. However, the effect of year of publication remained even after we controlled for sampling effort. Although these results have several possible explanations, it is suggested that a publication bias against non-significant or weaker findings offers the most parsimonious explanation. As in the medical sciences, non-significant results may take longer to publish and studies with both small sample sizes and non-significant results may be less likely to be published.

6.
Principal component analysis (PCA) and factor analysis (FA) are widely used in animal behaviour research. However, many authors automatically follow questionable practices implemented by default in general-purpose statistical software. Worse still, the results of such analyses in research reports typically omit many crucial details, which may hamper their evaluation. This article provides simple non-technical guidelines for PCA and FA, and suggests a standard for reporting the results of these analyses. Studies using PCA and FA should report: (1) whether the correlation or covariance matrix was used; (2) the sample size, preferably as a footnote to the table of factor loadings; (3) indices of sampling adequacy; (4) how the number of factors was assessed; (5) communalities, when sample size is small; (6) details of factor rotation; (7) determinacy indices, if factor scores are computed; and (8) preferably, the original correlation matrix.
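Several of the reporting items above — correlation versus covariance matrix (item 1), how the number of factors was chosen (item 4), and communalities (item 5) — can be made concrete in a few lines. The data here are simulated for illustration, and the Kaiser criterion is just one of several ways to choose the number of factors:

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical behavioural measurements: 50 subjects x 4 variables.
X = rng.normal(size=(50, 4))
X[:, 1] += X[:, 0]  # induce a correlation between two variables

# PCA on the correlation matrix = PCA on standardized variables (item 1).
R = np.corrcoef(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Loadings: eigenvectors scaled by the square roots of the eigenvalues.
loadings = eigvecs * np.sqrt(eigvals)

# Kaiser criterion (eigenvalue > 1) as one way to pick k factors (item 4).
k = int((eigvals > 1).sum())
# Communalities (item 5): squared loadings summed over retained factors.
communalities = (loadings[:, :k] ** 2).sum(axis=1)
```

Reporting the loadings table together with k, the communalities and the original correlation matrix R would satisfy most of the checklist in one go.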

7.
Disentangling the sources of variation in developing an effective immune response against pathogens is of major interest to immunoecology and evolutionary biology. To date, the link between immunocompetence and genetic variation at the major histocompatibility complex (MHC) has received little attention in wild animals, despite the key role of MHC genes in activating the adaptive immune system. Although several studies point to a link between MHC and immunocompetence, negative findings have also been reported. Such disparate findings suggest that limited statistical power might be affecting studies on this topic, owing to insufficient sample sizes and/or a generally small effect of MHC on the immunocompetence of wild vertebrates. To clarify this issue, we investigated the link between MHC variation and seven immunocompetence proxies in a large sample of barn owls and estimated the effect sizes and statistical power of this and published studies on this topic. We found that MHC poorly explained variation in immunocompetence of barn owls, with small-to-moderate associations between MHC and immunocompetence in owls (effect size: .1 ≤ r ≤ .3), similar to other vertebrates studied to date. Such small-to-moderate effects were largely associated with insufficient power, which was only sufficient (>0.8) to detect moderate-to-large effect sizes (r ≥ .3). Thus, studies linking MHC variation with immunocompetence in wild populations are underpowered to detect MHC effects, which are likely to be of generally small magnitude. Larger sample sizes (>200) will be required to achieve sufficient power in future studies aiming to robustly test for a link between MHC variation and immunocompetence.
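The pattern the authors report — adequate power only for r ≥ .3 at sample sizes around 200 — can be approximated with a Fisher z-transform power calculation. This is a standard approximation, not necessarily the authors' exact method:

```python
from math import atanh, sqrt
from scipy.stats import norm

def power_correlation(r, n, alpha=0.05):
    # Approximate power of a two-sided test of H0: rho = 0,
    # via the Fisher z-transform; n is the number of individuals.
    z_crit = norm.ppf(1 - alpha / 2)
    ncp = atanh(r) * sqrt(n - 3)
    return norm.cdf(ncp - z_crit) + norm.cdf(-ncp - z_crit)
```

With n = 200, power exceeds 0.8 for r = .3 but is well below 0.5 for r = .1, matching the abstract's conclusion that small MHC effects need much larger samples.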

8.
Genotype-imputation methods provide an essential technique for high-resolution genome-wide association (GWA) studies with millions of single-nucleotide polymorphisms. For optimal design and interpretation of imputation-based GWA studies, it is important to understand the connection between imputation error and power to detect associations at imputed markers. Here, using a 2 × 3 chi-square test, we describe a relationship between genotype-imputation error rates and the sample-size inflation required for achieving statistical power at an imputed marker equal to that obtained if genotypes at the marker were known with certainty. Surprisingly, typical imputation error rates (∼2%–6%) lead to a large increase in the required sample size (∼10%–60%), and in some African populations whose genotypes are particularly difficult to impute, the required sample-size increase is as high as ∼30%–150%. In most populations, each 1% increase in imputation error leads to an increase of ∼5%–13% in the sample size required for maintaining power. These results imply that in GWA sample-size calculations investigators will need to account for a potentially considerable loss of power from even low levels of imputation error and that development of additional genomic resources that decrease imputation error will translate into substantial reduction in the sample sizes needed for imputation-based detection of the variants that underlie complex human diseases.
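The sample-size inflation can be approximated with a widely used effective-sample-size shortcut: an imputed marker with imputation accuracy r² carries roughly the information of n·r² perfectly genotyped samples. This heuristic is an assumption of mine and is not the paper's 2 × 3 chi-square derivation, but it gives inflation factors of the same order:

```python
def required_sample_size(n_planned, imputation_r2):
    # Heuristic: an imputed marker with accuracy r^2 behaves like a
    # perfectly typed marker with effective sample size n * r^2, so
    # recovering the planned power needs roughly n / r^2 samples.
    return n_planned / imputation_r2
```

For example, imputation accuracy of r² = 0.9 implies about an 11% larger sample, in line with the ∼10%–60% range the abstract quotes for typical error rates.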

9.
10.
11.
12.
Colin M Beale. Ostrich 2018, 89(2):99–108
Ornithology in Africa has a long history. I review trends in the ornithological literature since 1990 within the context of the 14th Pan-African Ornithological Congress. Using full-text searches of papers on PubMed® and abstracts from the main ornithological journals, I found that most papers referencing African bird species focus on medical research questions. Restricting the literature search to the journals African ornithologists are most likely to publish in, I found 2 279 relevant papers. These describe work on 29% of African bird species from 82% of African bird families, in all but two African countries. Overall output has increased slightly over time, with more papers tackling more research topics. The most popular research topics were demography, conservation and climate, with disease ecology, physiology and ecological processes the least researched. I found that while many authors with African affiliations publish papers, outside of South Africa very few African-based authors reliably publish in the international research literature, perhaps indicating difficulties in establishing a productive research career in much of Africa. I conclude with a call to overseas ornithologists working in Africa and to organisations funding research in Africa to work together to build capacity outside of the few established research centres.

13.
The choice of an appropriate sample size for a study is a notoriously neglected topic in behavioural research, even though it is of utmost importance and the rules of action are more than clear – or are they? They may be clear where a formal power analysis is concerned. However, with the educated guesswork usually applied in behavioural studies there are various trade-offs, and the degrees of freedom are extensive. An analysis of 119 original studies haphazardly chosen from five leading behavioural journals suggests that the selected sample size reflects an influence of constraints more often than a rational optimization process. As predicted, field work involves larger samples than studies conducted in captivity, and invertebrates are used in greater numbers than vertebrates when the approach is similar. However, whether the study employs observational or experimental means seems to matter less in determining the number of subjects. This is surprising because, in contrast to mere observation, experiments allow researchers to reduce random variation in the data, which is an essential precondition for economizing on sample size. By pointing to inconsistent patterns, the intention of this article is to induce thought and discussion among behavioural researchers on this crucial issue, where apparently neither standard procedures are applied nor conventions have yet been established. This is an issue of concern for authors, referees and editors alike.

14.
15.
16.
Recent reviews of specific topics, such as the relationship between male attractiveness to females and fluctuating asymmetry, or attractiveness and the expression of secondary sexual characters, suggest that publication bias might be a problem in ecology and evolution. In these cases, there is a significant negative correlation between the sample size of published studies and the magnitude or strength of the research findings (formally, the 'effect size'). If all studies that are conducted are equally likely to be published, irrespective of their findings, there should not be a directional relationship between effect size and sample size; only a decrease in the variance in effect size as sample size increases, due to a reduction in sampling error. One interpretation of these reports of negative correlations is that studies with small sample sizes and weaker findings (smaller effect sizes) are less likely to be published. If the biological literature is systematically biased, this could undermine the attempts of reviewers to summarise actual biological relationships by inflating estimates of average effect sizes. But how common is this problem? And does it really affect the general conclusions of literature reviews? Here, we examine data sets of effect sizes extracted from 40 peer-reviewed, published meta-analyses. We estimate how many studies are missing using the newly developed 'trim and fill' method. This method uses asymmetry in plots of effect size against sample size ('funnel plots') to detect 'missing' studies. For random-effect models of meta-analysis, 38% (15/40) of data sets had a significant number of 'missing' studies. After correcting for potential publication bias, 21% (8/38) of weighted mean effects were no longer significantly greater than zero, and 15% (5/34) were no longer statistically robust when we used random-effects models in a weighted meta-analysis.
The mean correlation between sample size and the magnitude of standardised effect size was also significantly negative (rs = −0.20, P < 0.0001). Individual correlations were significantly negative (P < 0.10) in 35% (14/40) of cases. Publication bias may therefore affect the main conclusions of at least 15–21% of meta-analyses. We suggest that future literature reviews assess the robustness of their main conclusions by correcting for potential publication bias using the 'trim and fill' method.
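The sample-size/effect-size correlation test described above is straightforward to run on any extracted meta-analysis data set. The numbers below are hypothetical, purely for illustration of the mechanics:

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical per-study data from one meta-analysis (illustrative only):
sample_sizes = np.array([10, 15, 20, 40, 80, 160, 320])
effect_sizes = np.array([0.45, 0.40, 0.30, 0.22, 0.15, 0.12, 0.10])

rho, p_value = spearmanr(sample_sizes, effect_sizes)
# A significantly negative rho is the funnel-plot asymmetry signature
# that motivates applying the 'trim and fill' correction.
```

In an unbiased literature, rho should hover around zero; a strongly negative value, as in this toy data, flags possible suppression of small, weak studies.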

17.
Although phylogenetic hypotheses can provide insights into mechanisms of evolution, their utility is limited by our inability to differentiate simultaneous speciation events (hard polytomies) from rapid cladogenesis (soft polytomies). In the present paper, we tested the potential for statistical power analysis to differentiate between hard and soft polytomies in molecular phylogenies. Classical power analysis typically is used a priori to determine the sample size required to detect a particular effect size at a particular level of significance (α) with a certain power (1 − β). A posteriori, power analysis is used to infer whether failure to reject a null hypothesis results from lack of an effect or from insufficient data (i.e., low power). We adapted this approach to molecular data to infer whether polytomies result from simultaneous branching events or from insufficient sequence information. We then used this approach to determine the amount of sequence data (sample size) required to detect a positive branch length (effect size). A worked example is provided based on the auklets (Charadriiformes: Alcidae), a group of seabirds among which relationships are represented by a polytomy, despite analyses of over 3000 bp of sequence data. We demonstrate the calculation of effect sizes and sample sizes from sequence data using a normal curve test for the difference of a proportion from an expected value and a t-test for the difference of a mean from an expected value. Power analyses indicated that the data for the auklets should be sufficient to differentiate speciation events that occurred at least 100,000 yr apart (the duration of the shortest glacial and interglacial events of the Pleistocene), 2.6 million years ago.
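The normal-curve test for a proportion mentioned above implies a simple sample-size formula. The function below is an illustrative sketch: the variable names and the one-sided setup are my assumptions, not taken from the paper, where the "sample size" would be the number of base pairs sequenced:

```python
from math import sqrt
from scipy.stats import norm

def n_for_proportion(p0, p1, alpha=0.05, power=0.8):
    # Sample size (e.g. base pairs) needed for a one-sided normal-curve
    # test to distinguish an observed proportion p1 (e.g. substitutions
    # per site) from an expected value p0 at the given alpha and power.
    z_a = norm.ppf(1 - alpha)
    z_b = norm.ppf(power)
    return ((z_a * sqrt(p0 * (1 - p0)) + z_b * sqrt(p1 * (1 - p1)))
            / (p1 - p0)) ** 2
```

As expected, the closer the two proportions, the more sequence data are required — which is why very short internodes can remain unresolved even with thousands of base pairs.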

18.
Many medical and biological studies entail classifying a number of observations according to two factors, where one has two and the other three possible categories. This is the case of, for example, genetic association studies of complex traits with single-nucleotide polymorphisms (SNPs), where the a priori statistical planning, analysis, and interpretation of results are of critical importance. Here, we present methodology to determine the minimum sample size required to detect dependence in 2 x 3 tables based on Fisher's exact test, assuming that neither of the two margins is fixed and only the grand total N is known in advance. We provide the numerical tools necessary to determine these sample sizes for desired power, significance level, and effect size, where only the computational time can be a limitation for extreme parameter values. These programs can be accessed at . This solution of the sample size problem for an exact test will permit experimentalists to plan efficient sampling designs, determine the extent of statistical support for their hypotheses, and gain insight into the repeatability of their results. We apply this solution to the sample size problem to three empirical studies, and discuss the results with specified power and nominal significance levels.
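Short of the authors' own programs, the power of a 2 × 3 test can be approximated by Monte Carlo simulation. Note two simplifications relative to the paper's setup, both my assumptions: the row totals are fixed here rather than only the grand total N, and the chi-square statistic stands in for Fisher's exact test:

```python
import numpy as np
from scipy.stats import chi2_contingency

def simulated_power(p_a, p_b, n_per_group, alpha=0.05, n_sim=2000, seed=1):
    # Monte Carlo power estimate for a 2 x 3 contingency table:
    # draw category counts for two groups from multinomials with
    # probabilities p_a and p_b, then test for dependence.
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_sim):
        table = np.vstack([rng.multinomial(n_per_group, p_a),
                           rng.multinomial(n_per_group, p_b)])
        rejections += chi2_contingency(table)[1] < alpha
    return rejections / n_sim
```

Under the null (identical category probabilities) the rejection rate stays near alpha, while a clear difference in category distributions pushes the power toward 1 at moderate group sizes.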

19.
Single nucleotide polymorphisms (SNPs) have been proposed by some as the new frontier for population studies, and several papers have presented theoretical and empirical evidence reporting the advantages and limitations of SNPs. As a practical matter, however, it remains unclear how many SNP markers will be required, or what the optimal characteristics of those markers should be, in order to obtain sufficient statistical power to detect different levels of population differentiation. We use a hypothetical case to illustrate the process of designing a population genetics project, and present results from simulations that address several issues for maximizing statistical power to detect differentiation while minimizing the amount of effort in developing SNPs. Results indicate that (i) while ~30 SNPs should be sufficient to detect moderate (FST = 0.01) levels of differentiation, studies aimed at detecting demographic independence (e.g. FST < 0.005) may require 80 or more SNPs and large sample sizes; (ii) different SNP allele frequencies have little effect on power, and thus selection of SNPs can be relatively unbiased; (iii) increasing the sample size has a strong effect on power, so that the number of loci can be minimized when the sample number is known, and increasing sample size is almost always beneficial; and (iv) power is increased by including multiple SNPs within loci and inferring haplotypes, rather than trying to use only unlinked SNPs. This also has the practical benefit of reducing the SNP ascertainment effort, and may influence the decision of whether to seek SNPs in coding or noncoding regions.

20.
Microsatellite measures of inbreeding: a meta-analysis
Meta-analyses of published and unpublished correlations between phenotypic variation and two measures of genetic variation at microsatellite loci, multilocus heterozygosity (MLH) and mean d2, revealed that the strength of these associations is generally weak (mean r < 0.10). Effects on life-history trait variation were significantly greater than zero for both measures over all reported effect sizes (r = 0.0856 and 0.0479 for MLH and mean d2, respectively), whereas effects on morphometric traits were not (r = 0.0052 and r = 0.0038), which is consistent with the prediction that life-history traits exhibit greater inbreeding depression than morphometric traits. Effect sizes reported using mean d2 were smaller and more variable than those reported using MLH, suggesting that MLH may be the better metric for capturing inbreeding depression most of the time. However, paired effect sizes reported using both measures on the same data did not differ significantly. Several lines of evidence suggest that published effect sizes are upwardly biased. First, effect sizes from published studies were significantly higher than those reported in unpublished studies. Second, fail-safe numbers for reported effect sizes were generally quite low, with the exception of correlations between MLH and life-history traits. Finally, the slope of the regression of effect size on sample size was negative for most sets of traits. Taken together, these results suggest that studies designed to detect inbreeding depression in a life-history trait using microsatellites will need to sample in excess of 600 individuals to detect an average effect size (r = 0.10) with reasonable statistical power (0.80). Very few published studies have used sample sizes approaching this value.
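The "in excess of 600 individuals" figure can be sanity-checked with the Fisher z-transform sample-size formula. This is my own back-of-envelope calculation, not the authors' method, and it lands in the high 700s, consistent with "in excess of 600":

```python
from math import atanh, ceil
from scipy.stats import norm

def n_needed(r, alpha=0.05, power=0.8):
    # Approximate sample size needed to detect a correlation of
    # magnitude r in a two-sided test, via the Fisher z-transform.
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return ceil((z / atanh(r)) ** 2 + 3)
```

For the moderate effects typical of this literature (r around 0.3) fewer than 100 individuals suffice, which is why underpowered heterozygosity-fitness studies are so easy to run and so hard to interpret.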


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号