Similar Articles
20 similar articles found.
1.
In microarray studies it is common that the number of replications (i.e. the sample size) is small and that the distribution of expression values differs from normality. In this situation, permutation and bootstrap tests may be appropriate for the identification of differentially expressed genes. However, unlike bootstrap tests, permutation tests are not suitable for very small sample sizes, such as three per group. A variety of different bootstrap tests exists. For example, it is possible to adjust the data to have a common mean before the bootstrap samples are drawn. For small significance levels, which can occur when a large number of genes is investigated, the original bootstrap test, as well as a bootstrap test suggested for the Behrens-Fisher problem, have no power in cases of very small sample sizes. In contrast, the modified test based on adjusted data is powerful. Using a Monte Carlo simulation study, we demonstrate that the difference in power can be huge. In addition, the different tests are illustrated using microarray data.
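A minimal sketch of the adjusted-data bootstrap idea described above: both groups are shifted to a common mean before resampling, so that the bootstrap world satisfies the null hypothesis of equal means. The function is illustrative, not the authors' implementation.

```python
import numpy as np

def bootstrap_test_adjusted(x, y, n_boot=10_000, seed=0):
    """Two-sample bootstrap test on mean-adjusted data.

    Both groups are shifted to the pooled mean before resampling,
    so the bootstrap distribution reflects the null of equal means.
    """
    rng = np.random.default_rng(seed)
    t_obs = np.mean(x) - np.mean(y)
    pooled = np.mean(np.concatenate([x, y]))
    x0 = x - np.mean(x) + pooled   # adjusted to the common mean
    y0 = y - np.mean(y) + pooled
    t_boot = np.empty(n_boot)
    for b in range(n_boot):
        xb = rng.choice(x0, size=len(x0), replace=True)
        yb = rng.choice(y0, size=len(y0), replace=True)
        t_boot[b] = np.mean(xb) - np.mean(yb)
    # two-sided p-value: how often the null world is at least as extreme
    return np.mean(np.abs(t_boot) >= abs(t_obs))

# e.g. three replicates per group, the extreme case discussed above
p = bootstrap_test_adjusted(np.array([2.1, 2.4, 2.2]), np.array([3.0, 3.3, 2.9]))
```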

2.
Aim: Techniques that predict species potential distributions by combining observed occurrence records with environmental variables show much potential for application across a range of biogeographical analyses. Some of the most promising applications relate to species for which occurrence records are scarce, due to cryptic habits, locally restricted distributions or low sampling effort. However, the minimum sample sizes required to yield useful predictions remain difficult to determine. Here we developed and tested a novel jackknife validation approach to assess the ability to predict species occurrence when fewer than 25 occurrence records are available.
Location: Madagascar.
Methods: Models were developed and evaluated for 13 species of secretive leaf-tailed geckos (Uroplatus spp.) that are endemic to Madagascar, for which available sample sizes range from 4 to 23 occurrence localities (at 1 km² grid resolution). Predictions were based on 20 environmental data layers and were generated using two modelling approaches: a method based on the principle of maximum entropy (Maxent) and a genetic algorithm (GARP).
Results: We found high success rates and statistical significance in jackknife tests with sample sizes as low as five when the Maxent model was applied. Results for GARP at very low sample sizes (fewer than c. 10) were poorer. When sample sizes were experimentally reduced for those species with the most records, variability among predictions using different combinations of localities demonstrated that models were greatly influenced by exactly which observations were included.
Main conclusions: We emphasize that models developed using this approach with small sample sizes should be interpreted as identifying regions that have similar environmental conditions to where the species is known to occur, and not as predicting actual limits to the range of a species. The jackknife validation approach proposed here enables assessment of the predictive ability of models built using very small sample sizes, although use of this test with larger sample sizes may lead to overoptimistic estimates of predictive power. Our analyses demonstrate that geographical predictions developed from small numbers of occurrence records may be of great value, for example in targeting field surveys to accelerate the discovery of unknown populations and species.
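The jackknife validation logic can be sketched in a few lines: each occurrence record is withheld in turn, the model is rebuilt from the remainder, and a success is recorded when the withheld locality falls in the predicted suitable area. The `fit_model` and `predict_suitable` callables below are hypothetical placeholders, not Maxent or GARP, and the significance calculation of the original test (which accounts for the proportion of area predicted present) is omitted.

```python
def jackknife_validation(localities, fit_model, predict_suitable):
    """Leave-one-out success rate for a presence-only model with few records.

    localities       : list of occurrence records (e.g. (x, y) grid cells)
    fit_model        : hypothetical callable, records -> fitted model
    predict_suitable : hypothetical callable, (model, record) -> bool
    Returns the proportion of withheld records correctly predicted.
    """
    successes = 0
    for i in range(len(localities)):
        training = localities[:i] + localities[i + 1:]   # withhold record i
        model = fit_model(training)
        if predict_suitable(model, localities[i]):
            successes += 1
    return successes / len(localities)
```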

3.
Species distribution models are used for a range of ecological and evolutionary questions, but often are constructed from few and/or biased species occurrence records. Recent work has shown that the presence-only model Maxent performs well with small sample sizes. While the apparent accuracy of such models with small samples has been studied, less emphasis has been placed on the effect of small or biased species records on the secondary modeling steps, specifically accuracy assessment and threshold selection, particularly with profile (presence-only) modeling techniques. When testing the effects of small sample sizes on distribution models, accuracy assessment has generally been conducted with complete species occurrence data, rather than similarly limited (e.g. few or biased) test data. Likewise, selection of a probability threshold – a cut-off that classifies model output into discrete areas of presence and absence – has also generally been conducted with complete data. In this study we subsampled distribution data for an endangered rodent across multiple years to assess the effects of different sample sizes and types of bias on threshold selection, and examine the differences between apparent and actual accuracy of the models. Although some previously recommended threshold selection techniques showed little difference in threshold selection, the most commonly used methods performed poorly. Apparent model accuracy calculated from limited data was much higher than true model accuracy, but the true model accuracy was lower than it could have been with a more optimal threshold. That is, models with thresholds and accuracy calculated from biased and limited data had inflated reported accuracy, but were less accurate than they could have been if better data on species distribution were available and an optimal threshold were used.
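A minimal sketch of the apparent-versus-true accuracy gap discussed above, with synthetic data standing in for the rodent records; the threshold rule here (maximizing sensitivity plus specificity) is one of several such rules, and all variable names are illustrative.

```python
import numpy as np

def best_threshold(scores, labels, grid=np.linspace(0, 1, 101)):
    """Pick the cut-off maximising sensitivity + specificity (one common rule)."""
    best, best_t = -np.inf, 0.5
    for t in grid:
        pred = scores >= t
        sens = np.mean(pred[labels == 1])
        spec = np.mean(~pred[labels == 0])
        if sens + spec > best:
            best, best_t = sens + spec, t
    return best_t

rng = np.random.default_rng(1)
full_labels = rng.integers(0, 2, 2000)                      # complete occurrence data
full_scores = np.clip(0.3 * full_labels + rng.normal(0.4, 0.2, 2000), 0, 1)
sub = rng.choice(2000, size=15, replace=False)              # limited test records

t_limited = best_threshold(full_scores[sub], full_labels[sub])
t_optimal = best_threshold(full_scores, full_labels)

def accuracy(threshold):
    return np.mean((full_scores >= threshold) == (full_labels == 1))

apparent = np.mean((full_scores[sub] >= t_limited) == (full_labels[sub] == 1))
print(apparent, accuracy(t_limited), accuracy(t_optimal))
# apparent accuracy is typically inflated; true accuracy at t_limited is
# usually below what the optimal threshold would have achieved
```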

4.
Identifying genes that direct the mechanism of a disease from expression data is extremely useful in understanding how that mechanism works. This in turn may lead to better diagnoses and potentially could lead to a cure for that disease. This task becomes extremely challenging when the data are characterised by only a small number of samples and a high number of dimensions, as is often the case with gene expression data. Motivated by this challenge, we present a general framework that focuses on simplicity and data perturbation. These are the keys for robust identification of the most predictive features in such data. Within this framework, we propose a simple selective naive Bayes classifier discovered using a global search technique, and combine it with data perturbation to increase its robustness for small sample sizes. An extensive validation of the method was carried out using two applied datasets from the field of microarrays and a simulated dataset, all confounded by small sample sizes and high dimensionality. The method has been shown to be capable of selecting genes known to be associated with prostate cancer and viral infections.
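The selective naive Bayes with global search is specific to the paper, but the data-perturbation ingredient can be sketched generically: resample the data repeatedly, rank genes each time, and keep only those selected consistently. This sketch substitutes a plain t-statistic ranking for the authors' classifier and assumes at least two samples per class; all names are illustrative.

```python
import numpy as np

def stable_genes(X, y, k=10, n_perturb=200, min_freq=0.8, seed=0):
    """Keep genes whose top-k ranking survives repeated data perturbation.

    X : (n_samples, n_genes) expression matrix; y : binary class labels.
    Each replicate bootstraps within class, ranks genes by absolute
    two-sample t-statistic, and records which genes land in the top k.
    """
    rng = np.random.default_rng(seed)
    counts = np.zeros(X.shape[1])
    i0, i1 = np.flatnonzero(y == 0), np.flatnonzero(y == 1)
    for _ in range(n_perturb):
        a = X[rng.choice(i0, size=len(i0), replace=True)]   # perturbed class 0
        b = X[rng.choice(i1, size=len(i1), replace=True)]   # perturbed class 1
        se = np.sqrt(a.var(axis=0, ddof=1) / len(a) + b.var(axis=0, ddof=1) / len(b))
        t = np.abs(a.mean(axis=0) - b.mean(axis=0)) / (se + 1e-12)
        counts[np.argsort(t)[-k:]] += 1
    return np.flatnonzero(counts / n_perturb >= min_freq)   # consistently selected
```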

5.
A. G. Nazareno & A. S. Jump, Molecular Ecology, 2012, 21(12): 2847-2849; discussion 2850-2851.
Predicted parallel impacts of habitat fragmentation on genes and species lie at the core of conservation biology, yet tests of this rule are rare. In a recent article in Ecology Letters, Struebig et al. (2011) report that declining genetic diversity accompanies declining species diversity in tropical forest fragments. However, this study estimates diversity in many populations through extrapolation from very small sample sizes. Using the data of this recent work, we show that results estimated from the smallest sample sizes drive the species-genetic diversity correlation (SGDC), owing to a false-positive association between habitat fragmentation and loss of genetic diversity. Small sample sizes are a persistent problem in habitat fragmentation studies, the results of which often do not fit simple theoretical models. It is essential, therefore, that data assessing the proposed SGDC are sufficient in order that conclusions be robust.

6.
The extent of microbial diversity is an intrinsically fascinating subject of profound practical importance. The term 'diversity' may allude to the number of taxa or species richness as well as their relative abundance. There is uncertainty about both, primarily because sample sizes are too small. Non-parametric diversity estimators make gross underestimates if used with small sample sizes on unevenly distributed communities. One can make richness estimates over many scales using small samples by assuming a species/taxa-abundance distribution. However, no one knows what the underlying taxa-abundance distributions are for bacterial communities. Latterly, diversity has been estimated by fitting data from gene clone libraries and extrapolating from this to taxa-abundance curves to estimate richness. However, since sample sizes are small, we cannot be sure that such samples are representative of the community from which they were drawn. It is however possible to formulate, and calibrate, models that predict the diversity of local communities and of samples drawn from that local community. The calibration of such models suggests that migration rates are small and decrease as the community gets larger. The preliminary predictions of the model are qualitatively consistent with the patterns seen in clone libraries in 'real life'. The validation of this model is also confounded by small sample sizes. However, if such models were properly validated, they could form invaluable tools for the prediction of microbial diversity and a basis for the systematic exploration of microbial diversity on the planet.
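As a concrete example of a non-parametric richness estimator and its small-sample behaviour, here is a minimal sketch of the widely used Chao1 estimator, which extrapolates from singleton and doubleton counts; with a small sample from an uneven community it returns a gross lower bound rather than the true richness.

```python
from collections import Counter

def chao1(abundances):
    """Chao1 lower-bound richness estimate from per-taxon counts.

    abundances : iterable of observed counts per taxon (zeros ignored).
    """
    counts = [a for a in abundances if a > 0]
    s_obs = len(counts)
    f = Counter(counts)
    f1, f2 = f[1], f[2]             # singletons and doubletons
    if f2 == 0:                     # bias-corrected form when no doubletons
        return s_obs + f1 * (f1 - 1) / 2.0
    return s_obs + f1 * f1 / (2.0 * f2)

# a tiny, uneven sample: the estimate sits far below any plausible true richness
print(chao1([120, 30, 5, 2, 1, 1, 1]))   # 11.5 estimated taxa
```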

7.
The 2×2 contingency table is a common analytical method for wildlife studies, but inappropriate analyses and inferences are not uncommon. Issues of concern are presented for selecting the appropriate test for analyzing these data sets. These include the choice of test relative to experimental or sampling design and breadth of intended inferences, the careful statement of hypotheses, and analyses with small sample sizes. Examples from the wildlife literature are used to reinforce the statistical concepts.
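A minimal illustration of test choice for a 2×2 table with small counts, assuming SciPy: the chi-square approximation becomes doubtful when expected cell counts are small, and an exact test is the usual fallback. The example table is invented.

```python
from scipy.stats import chi2_contingency, fisher_exact

# e.g. habitat use by sex: rows = male/female, cols = used/not used
table = [[8, 2],
         [1, 9]]

chi2, p_chi2, dof, expected = chi2_contingency(table)   # Yates-corrected by default
odds_ratio, p_exact = fisher_exact(table)               # exact test for small counts

# with expected cell counts this small, report the exact p-value
print(expected.min(), p_chi2, p_exact)
```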

8.
Parasite prevalence (the proportion of infected hosts) is a common measure used to describe parasitaemias and to unravel ecological and evolutionary factors that influence host-parasite relationships. Prevalence estimates are often based on small sample sizes because of either low abundance of the hosts or logistical problems associated with their capture or laboratory analysis. Because the accuracy of prevalence estimates is lower with small sample sizes, addressing sample size has been a common problem when dealing with prevalence data. Different methods are currently being applied to overcome this statistical challenge, but far from being different correct ways of solving the same problem, some are clearly wrong, and others need improvement.
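One defensible response to the small-sample problem is to report an exact confidence interval for prevalence rather than the point estimate alone. A minimal sketch of the standard Clopper-Pearson interval, assuming SciPy; this is a textbook method, not one specifically endorsed by the paper.

```python
from scipy.stats import beta

def clopper_pearson(k, n, conf=0.95):
    """Exact (Clopper-Pearson) CI for a prevalence of k infected out of n hosts."""
    alpha = 1.0 - conf
    lo = 0.0 if k == 0 else beta.ppf(alpha / 2, k, n - k + 1)
    hi = 1.0 if k == n else beta.ppf(1 - alpha / 2, k + 1, n - k)
    return lo, hi

# 2 infected hosts out of 8 sampled: the interval is wide, as it should be
print(clopper_pearson(2, 8))   # roughly (0.03, 0.65)
```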

9.
We consider the problem of testing for heterogeneity of K proportions when K is not small and the binomial sample sizes may not be large. We assume that the binomial proportions are normally distributed with variance σ². The asymptotic relative efficiency (ARE) of the usual chi-square test is found relative to the likelihood-based tests for σ² = 0. The chi-square test is found to have ARE = 1 when the binomial sample sizes are all equal and high relative efficiency for other cases. The efficiency is low only in cases where there is insufficient data to use the chi-square test.
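For reference, the usual chi-square statistic for heterogeneity of K proportions can be written in a few lines (SciPy assumed; inputs are per-group success counts and sample sizes, and the names are illustrative):

```python
import numpy as np
from scipy.stats import chi2

def chi2_heterogeneity(successes, sizes):
    """Test H0: p_1 = ... = p_K against heterogeneity of the K proportions."""
    x = np.asarray(successes, float)
    n = np.asarray(sizes, float)
    p_hat = x.sum() / n.sum()                       # pooled proportion under H0
    stat = np.sum((x - n * p_hat) ** 2 / (n * p_hat * (1 - p_hat)))
    return stat, chi2.sf(stat, df=len(n) - 1)

stat, p = chi2_heterogeneity([3, 7, 5, 9], [10, 12, 10, 15])
```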

10.
Several asymptotic tests were proposed for testing the null hypothesis of marginal homogeneity in square contingency tables with r categories. A simulation study was performed for comparing the power of four finite conservative conditional test procedures and of two asymptotic tests for twelve different contingency schemes for small sample sizes. While an asymptotic test proposed by Stuart (1955) showed a rather satisfactory behaviour for moderate sample sizes, an asymptotic test proposed by Bhapkar (1966) was quite anticonservative. With no a priori information the performance of (r - 1) simultaneous conditional binomial tests with a Bonferroni adjustment proved to be a quite efficient procedure. With assumptions about where to expect the deviations from the null hypothesis, other procedures favouring the larger or smaller conditional sample sizes, respectively, can have a great efficiency. The procedures are illustrated by means of a numerical example from clinical psychology.
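To illustrate the flavour of conditional binomial testing with a Bonferroni adjustment, here is a minimal sketch that tests each discordant off-diagonal pair of a square table against p = 0.5. Note this pairwise variant illustrates the conditioning idea only; it is not the (r - 1)-test procedure compared in the paper. It assumes `scipy.stats.binomtest` (SciPy ≥ 1.7).

```python
import numpy as np
from scipy.stats import binomtest

def conditional_binomial_tests(table, alpha=0.05):
    """Pairwise conditional tests of marginal homogeneity in a square table.

    Under H0, each discordant pair (n_ij, n_ji) is Binomial(n_ij + n_ji, 0.5);
    p-values are compared against a Bonferroni-adjusted level.
    """
    t = np.asarray(table)
    r = t.shape[0]
    pairs = [(i, j) for i in range(r) for j in range(i + 1, r)]
    level = alpha / len(pairs)                     # Bonferroni adjustment
    results = []
    for i, j in pairs:
        n_pair = int(t[i, j] + t[j, i])
        p = binomtest(int(t[i, j]), n_pair, 0.5).pvalue if n_pair else 1.0
        results.append(((i, j), p, p < level))
    return results

# small-sample 3x3 example (e.g. pre/post ratings in clinical psychology)
print(conditional_binomial_tests([[10, 4, 1], [1, 12, 5], [0, 1, 8]]))
```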

11.
An exact test of the Hardy-Weinberg law.
W. Chapco, Biometrics, 1976, 32(1): 183-189.
An exact distribution of a finite sample drawn from an infinite population in Hardy-Weinberg equilibrium is described for k alleles. Accordingly, an exact test of the law is presented and compared with two χ²-tests for two and three alleles. For two alleles, it is shown that the "classical" χ²-test is very adequate for sample sizes as small as ten. For three alleles, it is shown that a simpler formulation based on Levene's distribution approximates the exact test of this paper rather closely. However, it is recommended that researchers continue to employ the standard χ²-test for all sample sizes and abide by it if the corresponding probability value is not "too close" to the critical level; otherwise, an exact test should be used.
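For two alleles the exact test can be written down directly from Levene's conditional distribution: given the sample size and the allele counts, the probability of each possible heterozygote count is enumerated, and the p-value sums the probabilities no larger than the observed one. A minimal sketch of that standard construction (not Chapco's original program):

```python
from math import factorial

def hwe_exact_p(n_AA, n_AB, n_BB):
    """Exact Hardy-Weinberg test for two alleles via Levene's distribution."""
    n = n_AA + n_AB + n_BB
    n_A, n_B = 2 * n_AA + n_AB, 2 * n_BB + n_AB

    def prob(het):
        # P(het heterozygotes | n individuals, n_A copies of allele A)
        hom_A, hom_B = (n_A - het) // 2, (n_B - het) // 2
        multinom = factorial(n) // (factorial(hom_A) * factorial(het) * factorial(hom_B))
        return multinom * 2 ** het * factorial(n_A) * factorial(n_B) / factorial(2 * n)

    support = range(n_A % 2, min(n_A, n_B) + 1, 2)   # feasible heterozygote counts
    p_obs = prob(n_AB)
    # sum all outcomes no more probable than the observed one (with float tolerance)
    return sum(prob(h) for h in support if prob(h) <= p_obs * (1 + 1e-12))

print(hwe_exact_p(3, 5, 2))   # a sample far too small for the chi-square test
```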

12.
Effects of sample size on the performance of species distribution models
A wide range of modelling algorithms is used by ecologists, conservation practitioners, and others to predict species ranges from point locality data. Unfortunately, the amount of data available is limited for many taxa and regions, making it essential to quantify the sensitivity of these algorithms to sample size. This is the first study to address this need by rigorously evaluating a broad suite of algorithms with independent presence–absence data from multiple species and regions. We evaluated predictions from 12 algorithms for 46 species (from six different regions of the world) at three sample sizes (100, 30, and 10 records). We used data from natural history collections to run the models, and evaluated the quality of model predictions with area under the receiver operating characteristic curve (AUC). With decreasing sample size, model accuracy decreased and variability increased across species and between models. Novel modelling methods that incorporate both interactions between predictor variables and complex response shapes (i.e. GBM, MARS-INT, BRUTO) performed better than most methods at large sample sizes but not at the smallest sample sizes. Other algorithms were much less sensitive to sample size, including an algorithm based on maximum entropy (MAXENT) that had among the best predictive power across all sample sizes. Relative to other algorithms, a distance metric algorithm (DOMAIN) and a genetic algorithm (OM-GARP) had intermediate performance at the largest sample size and among the best performance at the lowest sample size. No algorithm predicted consistently well with small sample size (n < 30), and this should encourage highly conservative use of predictions based on small sample sizes, restricting their use to exploratory modelling.
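A minimal sketch of the evaluation design: subsample training records at the three sample sizes, fit a simple model, and score each fit on independent presence-absence data with AUC. Logistic regression and synthetic data stand in for the twelve algorithms and the natural history collections; scikit-learn is assumed.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(3000, 5))                          # environmental predictors
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=3000) > 0).astype(int)
X_test, y_test = X[2000:], y[2000:]                     # independent evaluation data

for n in (100, 30, 10):                                 # sample sizes from the study
    aucs = []
    for _ in range(50):                                 # repeat to see variability
        idx = rng.choice(2000, size=n, replace=False)
        if len(set(y[idx])) < 2:                        # need both classes to fit
            continue
        model = LogisticRegression().fit(X[idx], y[idx])
        aucs.append(roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
    print(n, np.mean(aucs), np.std(aucs))               # accuracy falls, spread grows
```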

13.
Radiologists' interpretation of screening mammograms is measured by accuracy indices such as sensitivity and specificity. The hypothesis that radiologists' interpretation of screening mammograms is constant across time can be tested by measuring overdispersion. However, small sample sizes are problematic for the accuracy of asymptotic approaches. In this article, we propose an exact conditional distribution for testing overdispersion relative to the binomial assumption made for the accuracy indices. An exact p-value can be defined from the developed distribution. We also describe an algorithm for computing this exact test. The proposed method is applied to data from a study in reading screening mammograms in a population of US radiologists (Beam et al., 2003). The exact method is compared analytically with a currently available method based on large sample approximations.

14.
We consider profile-likelihood inference based on the multinomial distribution for assessing the accuracy of a diagnostic test. The methods apply to ordinal rating data when accuracy is assessed using the area under the receiver operating characteristic (ROC) curve. Simulation results suggest that the derived confidence intervals have acceptable coverage probabilities, even when sample sizes are small and the diagnostic tests have high accuracies. The methods extend to stratified settings and situations in which the ratings are correlated. We illustrate the methods using data from a clinical trial on the detection of ovarian cancer.

15.
Jain et al. introduced the Local Pooled Error (LPE) statistical test designed for use with small sample size microarray gene-expression data. Based on an asymptotic proof, the test multiplicatively adjusts the standard error for a test of differences between two classes of observations by π/2, due to the use of medians rather than means as measures of central tendency. The adjustment is upwardly biased at small sample sizes, however, producing fewer than expected small P-values with a consequent loss of statistical power. We present an empirical correction to the adjustment factor which removes the bias and produces theoretically expected P-values when distributional assumptions are met. Our adjusted LPE measure should prove useful to ongoing methodological studies designed to improve the LPE's performance for microarray and proteomics applications, and to future work on other high-throughput biotechnologies. AVAILABILITY: The software is implemented in the R language and can be downloaded from the Bioconductor project website (http://www.bioconductor.org).
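The adjustment under discussion comes from the fact that the asymptotic variance of a sample median is π/2 times that of the mean. A minimal sketch of a median-based z-statistic with that inflation; the paper's empirical small-sample correction is represented only by a hypothetical placeholder argument, not its actual form.

```python
import numpy as np

def lpe_like_z(x, y, pooled_var_x, pooled_var_y, small_n_correction=1.0):
    """Median-based z-statistic with the asymptotic pi/2 variance inflation.

    pooled_var_*       : baseline (locally pooled) variance estimate per class.
    small_n_correction : hypothetical placeholder for an empirical factor that
        shrinks the upward bias of pi/2 at small sample sizes.
    """
    factor = (np.pi / 2) * small_n_correction
    se = np.sqrt(factor * (pooled_var_x / len(x) + pooled_var_y / len(y)))
    return (np.median(x) - np.median(y)) / se
```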

16.
17.
The use of small sample sizes in human and primate evolutionary research is commonplace. Estimating how well small samples represent the underlying population, however, is not commonplace. Because the accuracy of determinations of taxonomy, phylogeny, and evolutionary process is dependent upon how well the study sample represents the population of interest, characterizing the uncertainty, or potential error, associated with analyses of small sample sizes is essential. We present a method for estimating the probability that the sample mean is within a desired fraction of the standard deviation of the true mean using small (n < 10) or very small (n ≤ 5) sample sizes. This method can be used by researchers to determine post hoc the probability that their sample is a meaningful approximation of the population parameter. We tested the method using a large craniometric data set commonly used by researchers in the field. Given our results, we suggest that sample estimates of the population mean can be reasonable and meaningful even when based on small, and perhaps even very small, sample sizes.
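Under a normality assumption the quantity described has a closed form: the sample mean of n observations is distributed N(μ, σ²/n), so the probability of landing within a fraction f of σ from μ is 2Φ(f√n) − 1. A minimal sketch under that assumption (the paper's method is more general, so treat this as illustration only):

```python
from math import sqrt
from scipy.stats import norm

def p_mean_within(f, n):
    """P(|sample mean - mu| <= f * sigma) for n iid normal observations."""
    return 2 * norm.cdf(f * sqrt(n)) - 1

for n in (3, 5, 10):
    print(n, round(p_mean_within(0.5, n), 3))   # f = half a standard deviation
# n=3 -> ~0.614, n=5 -> ~0.736, n=10 -> ~0.886
```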

18.
Clinical trials with Poisson distributed count data as the primary outcome are common in various medical areas, such as relapse counts in multiple sclerosis trials or the number of attacks in trials for the treatment of migraine. In this article, we present approximate sample size formulae for testing noninferiority using asymptotic tests which are based on restricted or unrestricted maximum likelihood estimators of the Poisson rates. The Poisson outcomes are allowed to be observed for unequal follow-up schemes, and both the situation that the noninferiority margin is expressed in terms of the difference and the situation that it is expressed as a ratio are considered. The exact type I error rates and powers of these tests are evaluated and the accuracy of the approximate sample size formulae is examined. The test statistic using the restricted maximum likelihood estimators (for the difference test problem) and the test statistic that is based on the logarithmic transformation and employs the maximum likelihood estimators (for the ratio test problem) show favorable type I error control and can be recommended for practical application. The approximate sample size formulae show high accuracy even for small sample sizes and provide power values identical or close to the target values. The methods are illustrated by a clinical trial example from anesthesia.
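For orientation, a textbook Wald-type approximation for the difference case with equal follow-up can be sketched as below; this is not the paper's formulae, and the numbers are illustrative.

```python
from math import ceil
from scipy.stats import norm

def n_per_group(lam_e, lam_c, margin, alpha=0.025, power=0.8):
    """Approximate per-group n for Poisson noninferiority on the difference scale.

    H0: lam_e - lam_c >= margin (experimental worse by at least the margin)
    H1: lam_e - lam_c <  margin (noninferior); lower event rates are better.
    Wald-type normal approximation; assumes lam_e - lam_c < margin.
    """
    z = norm.ppf(1 - alpha) + norm.ppf(power)
    return ceil(z ** 2 * (lam_e + lam_c) / (margin - (lam_e - lam_c)) ** 2)

# e.g. true rates 0.7 relapses/year in both arms, margin 0.35
print(n_per_group(0.7, 0.7, 0.35))   # ~90 per group under these assumptions
```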

19.
The classical group sequential test procedures that were proposed by Pocock (1977) and O'Brien and Fleming (1979) rest on the assumption of equal sample sizes between the interim analyses. In this regard it is well known that for most situations there is not a great amount of additional Type I error if monitoring is performed for unequal sample sizes between the stages. In some cases, however, problems can arise, resulting in an unacceptably liberal behavior of the test procedure. In this article worst case scenarios in sample size imbalances between the inspection times are considered. Exact critical values for the Pocock and the O'Brien and Fleming group sequential designs are derived for arbitrary and for varying but bounded sample sizes. The approach represents a reasonable alternative to the flexible method that is based on the Type I error rate spending function. The SAS syntax for performing the calculations is provided. When using these procedures, the inspection times or the sample sizes in the consecutive stages need to be chosen independently of the data observed so far.
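The size distortion under imbalance is easy to probe by simulation: apply a constant Pocock-style critical value, tabulated for equally spaced looks, to an unequal schedule and estimate the resulting two-sided type I error. A minimal Monte Carlo sketch, with an invented imbalanced schedule (2.289 is the standard Pocock constant for three equally spaced looks at two-sided α = 0.05):

```python
import numpy as np

def type1_error(stage_sizes, crit, n_sim=200_000, seed=0):
    """Monte Carlo two-sided type I error of a constant interim boundary.

    Under H0, per-stage sums of iid N(0,1) data give the usual correlated
    interim statistics z_k = S_k / sqrt(n_1 + ... + n_k).
    """
    rng = np.random.default_rng(seed)
    sizes = np.asarray(stage_sizes, float)
    incr = rng.normal(size=(n_sim, len(sizes))) * np.sqrt(sizes)
    z = np.cumsum(incr, axis=1) / np.sqrt(np.cumsum(sizes))
    return float(np.mean(np.any(np.abs(z) > crit, axis=1)))

POCOCK_C3 = 2.289                              # K = 3, two-sided alpha = 0.05
print(type1_error([20, 20, 20], POCOCK_C3))    # close to the nominal 0.05
print(type1_error([5, 5, 50], POCOCK_C3))      # imbalanced looks shift the level
```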

20.
P. S. Gill, Biometrics, 2004, 60(2): 525-527.
We propose a likelihood-based test for comparing the means of two or more log-normal distributions, with possibly unequal variances. A modification to the likelihood ratio test is needed when sample sizes are small. The performance of the proposed procedures is compared with the F-ratio test using Monte Carlo simulations.
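A minimal sketch of the unmodified likelihood-ratio construction for two samples (not Gill's small-sample modification): on the log scale the data are normal, the log-normal mean is exp(μ + σ²/2), and the null constraint of equal means can be imposed by substitution before maximising. SciPy is assumed for the optimisation.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

def lognormal_mean_lrt(x1, x2):
    """LRT of equal log-normal means with possibly unequal variances.

    Works on y = log(x); under H0, mu2 = mu1 + (s1^2 - s2^2)/2 so that
    exp(mu1 + s1^2/2) == exp(mu2 + s2^2/2).
    """
    y1, y2 = np.log(x1), np.log(x2)

    def nll(y, mu, s2):   # negative normal log-likelihood
        return 0.5 * len(y) * np.log(2 * np.pi * s2) + np.sum((y - mu) ** 2) / (2 * s2)

    # unrestricted maximum: per-sample MLEs (ddof=0 variances)
    m1, v1 = y1.mean(), y1.var()
    m2, v2 = y2.mean(), y2.var()
    ll_full = -(nll(y1, m1, v1) + nll(y2, m2, v2))

    # restricted maximum over (mu1, log s1^2, log s2^2), constraint substituted
    def neg_restricted(theta):
        mu1, s1sq, s2sq = theta[0], np.exp(theta[1]), np.exp(theta[2])
        mu2 = mu1 + (s1sq - s2sq) / 2.0
        return nll(y1, mu1, s1sq) + nll(y2, mu2, s2sq)

    res = minimize(neg_restricted, np.array([m1, np.log(v1), np.log(v2)]),
                   method="Nelder-Mead")
    stat = 2 * (ll_full + res.fun)        # -2 * (ll_null - ll_full)
    return stat, chi2.sf(stat, df=1)      # asymptotic reference; small n needs care
```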
