Similar Documents
20 similar documents found (search time: 31 ms)
1.
Zhou XH  Tu W 《Biometrics》1999,55(2):645-651
In this paper, we consider the problem of testing the mean equality of several independent populations that contain log-normal and possibly zero observations. We first show that the methods currently used in statistical practice, including the nonparametric Kruskal-Wallis test, the standard ANOVA F-test, and its two modified versions (the Welch test and the Brown-Forsythe test), can have poor Type I error control. We then propose a likelihood ratio test that is shown to have much better Type I error control than the existing methods. Finally, we use the proposed test to analyze the two real data sets that motivated our study.
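A minimal simulation sketch (not part of the paper) of applying two of the standard tests discussed above to zero-inflated log-normal samples drawn under the null; the group parameters and sample sizes are illustrative assumptions, and the proposed likelihood ratio test itself is not reproduced here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def sample_group(n, p_zero, mu, sigma):
    # Zero-inflated log-normal: zero with probability p_zero,
    # otherwise log-normal(mu, sigma).
    x = rng.lognormal(mean=mu, sigma=sigma, size=n)
    x[rng.random(n) < p_zero] = 0.0
    return x

# Three populations drawn under the null (identical distributions):
groups = [sample_group(50, 0.2, 0.0, 1.0) for _ in range(3)]

# Two of the standard tests the paper evaluates:
F, p_anova = stats.f_oneway(*groups)   # standard ANOVA F-test
H, p_kw = stats.kruskal(*groups)       # Kruskal-Wallis test
```

Repeating this over many replicates and counting rejections at the nominal level is how the Type I error behavior criticized above would be quantified.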

2.
Investigation of microbial communities, particularly human-associated communities, is significantly enhanced by the vast amounts of sequence data produced by high-throughput sequencing technologies. However, these data create high-dimensional complex data sets that consist of a large proportion of zeros, non-negative skewed counts, and, frequently, a limited number of samples. These features distinguish sequence data from other forms of high-dimensional data, and are not adequately addressed by statistical approaches in common use. Ultimately, medical studies may identify targeted interventions or treatments, but the lack of analytic tools for feature selection and for identifying the taxa responsible for differences between groups is hindering advancement. The objective of this paper is to examine the application of a two-part statistic to identify taxa that differ between two groups. The advantages of the two-part statistic over common statistical tests applied to sequence count datasets are discussed. Results from the t-test, the Wilcoxon test, and the two-part test are compared using sequence counts from microbial ecology studies in cystic fibrosis and from cenote samples. We show superior performance of the two-part statistic for analysis of sequence data. The improved performance in microbial ecology studies was independent of study type and sequence technology used.
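One common formulation of a two-part statistic for zero-inflated data combines a test on the proportion of zeros with a rank test on the non-zero values; the sketch below assumes that Lachenbruch-style combination (the paper's exact statistic may differ), with simulated data for illustration.

```python
import numpy as np
from scipy import stats

def two_part_test(x, y):
    # Part 1: two-proportion z-test on the fractions of zeros.
    n1, n2 = len(x), len(y)
    k1, k2 = np.sum(x == 0), np.sum(y == 0)
    p_pool = (k1 + k2) / (n1 + n2)
    se = np.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z1 = 0.0 if se == 0 else (k1 / n1 - k2 / n2) / se
    # Part 2: Wilcoxon rank-sum test on the non-zero observations.
    xp, yp = x[x > 0], y[y > 0]
    z2 = stats.ranksums(xp, yp).statistic if len(xp) and len(yp) else 0.0
    # Combine: under the null, X2 = z1^2 + z2^2 ~ chi-square with 2 df.
    x2 = z1 ** 2 + z2 ** 2
    return x2, stats.chi2.sf(x2, df=2)

rng = np.random.default_rng(1)
x = rng.lognormal(size=40); x[:10] = 0.0   # group with ~25% zeros
y = rng.lognormal(size=40)
x2, pval = two_part_test(x, y)
```

The statistic gains power when the groups differ in either the zero fraction or the distribution of the non-zero counts, which is the advantage claimed over the plain t-test or Wilcoxon test.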

3.
The unbiased estimation of fluctuating asymmetry (FA) requires independent repeated measurements on both sides. The statistical analysis of such data is currently performed by a two-way mixed ANOVA. Although this approach produces unbiased estimates of FA, many studies do not utilize this method. This may be attributed in part to the fact that the complete analysis of FA is very cumbersome and cannot be performed automatically with standard statistical software. Therefore, further elaboration of the statistical tools to analyse FA should focus on the usefulness of the method, in order for the correct statistical approaches to be applied more regularly. In this paper we propose a mixed regression model with restricted maximum likelihood (REML) parameter estimation to model FA. This routine yields exactly the same estimates of FA as the two-way mixed ANOVA. Yet the advantages of this approach are that it allows (a) testing the statistical significance of FA, (b) modelling and testing heterogeneity in both FA and measurement error (ME) among samples, (c) testing for nonzero directional asymmetry and (d) obtaining unbiased estimates of individual FA levels. The switch from a mixed two-way ANOVA to a mixed regression model was made to avoid overparametrization. Two simulation studies are presented. The first shows that a previously proposed method to test the significance of FA is incorrect, unlike our mixed regression approach. In the second simulation study we show that a traditionally applied measure of individual FA [abs(left – right)] is biased by ME. The proposed mixed regression method, however, produces unbiased estimates of individual FA after modelling heterogeneity in ME. The applicability of this method is illustrated with two analyses.
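The bias of the traditional abs(left - right) index under measurement error (the paper's second simulation) can be illustrated with a short NumPy sketch; the variance values are arbitrary assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
sigma_fa, sigma_me = 1.0, 0.5   # assumed FA and measurement-error SDs

# Signed asymmetry per individual, plus independent measurement error
# on each side:
fa = rng.normal(0.0, sigma_fa, n)
left = fa / 2 + rng.normal(0.0, sigma_me, n)
right = -fa / 2 + rng.normal(0.0, sigma_me, n)

# left - right = fa + error difference, so Var(left - right) equals
# sigma_fa^2 + 2*sigma_me^2 rather than sigma_fa^2: abs(left - right)
# systematically overestimates individual FA.
naive = np.mean(np.abs(left - right))
truth = sigma_fa * np.sqrt(2.0 / np.pi)   # E|FA| for a N(0, sigma_fa^2)
```

The gap between `naive` and `truth` is exactly the ME inflation the mixed regression approach is designed to model out.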

4.
Yize Zhao  Ben Wu  Jian Kang 《Biometrics》2023,79(2):655-668
Multimodality or multiconstruct data arise increasingly in functional neuroimaging studies to characterize brain activity under different cognitive states. Relying on those high-resolution imaging collections, it is of great interest to identify predictive imaging markers and intermodality interactions with respect to behavior outcomes. Currently, most of the existing variable selection models do not consider predictive effects from interactions, and the desired higher-order terms can only be included in the predictive mechanism following a two-step procedure, suffering from potential misspecification. In this paper, we propose a unified Bayesian prior model to simultaneously identify main effect features and intermodality interactions within the same inference platform in the presence of high-dimensional data. To accommodate the brain topological information and correlation between modalities, our prior is designed by compiling the intermediate selection status of sequential partitions in light of the data structure and brain anatomical architecture, so that we can improve posterior inference and enhance biological plausibility. Through extensive simulations, we show the superiority of our approach in main and interaction effects selection, and prediction under multimodality data. Applying the method to the Adolescent Brain Cognitive Development (ABCD) study, we characterize the brain functional underpinnings with respect to general cognitive ability under different memory load conditions.

5.
MOTIVATION: Microarray technology allows the monitoring of expression levels for thousands of genes simultaneously. In time-course experiments in which gene expression is monitored over time, we are interested in testing gene expression profiles for different experimental groups. However, no sophisticated analytic methods have yet been proposed to handle time-course experiment data. RESULTS: We propose a statistical test procedure based on the ANOVA model to identify genes that have different gene expression profiles among experimental groups in time-course experiments. In particular, we propose a permutation test which does not require the normality assumption. For this test, we use residuals from the ANOVA model with time effects only. Using this test, we detect genes that have different gene expression profiles among experimental groups. The proposed model is illustrated using cDNA microarrays of 3840 genes obtained in an experiment to search for changes in gene expression profiles during neuronal differentiation of cortical stem cells.
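A sketch of the residual-permutation idea for one gene: take residuals from a time-effects-only model, then permute group labels and recompute a between-group statistic. For simplicity this sketch permutes labels freely across observations (ignoring within-subject correlation); the layout and helper names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def perm_group_test(y, group, time, n_perm=999):
    # Residuals from the time-effects-only ANOVA model: subtract the
    # per-time-point mean from each observation.
    resid = y.astype(float).copy()
    for t in np.unique(time):
        resid[time == t] -= resid[time == t].mean()

    def between_ss(r, g):
        # Between-group sum of squares of the residuals.
        return sum(np.sum(g == k) * r[g == k].mean() ** 2
                   for k in np.unique(g))

    obs = between_ss(resid, group)
    exceed = sum(between_ss(resid, rng.permutation(group)) >= obs
                 for _ in range(n_perm))
    return (exceed + 1) / (n_perm + 1)

# Toy layout: 30 subjects (10 per group), 4 time points each, null data:
time = np.tile(np.arange(4), 30)
group = np.repeat(np.arange(3), 40)
y = rng.normal(size=120)
p = perm_group_test(y, group, time)
```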

6.
Pan W  Basu S  Shen X 《Human heredity》2011,72(2):98-109
There has been an increasing interest in detecting gene-gene and gene-environment interactions in genetic association studies. A major statistical challenge is how to deal with a large number of parameters measuring possible interaction effects, which leads to reduced power of any statistical test due to a large number of degrees of freedom or the high cost of adjustment for multiple testing. Hence, a popular idea is to first apply some dimension reduction technique before testing, while another is to apply only statistical tests that are developed for and robust to high-dimensional data. To combine both ideas, we propose applying an adaptive sum of squared score (SSU) test and several other adaptive tests. These adaptive tests are extensions of the adaptive Neyman test [Fan, 1996], which was originally proposed for high-dimensional data, providing a simple and effective way of dimension reduction. On the other hand, the original SSU test coincides with a version of a test specifically developed for high-dimensional data. We apply these adaptive tests and their original nonadaptive versions to simulated data to detect interactions between two groups of SNPs (e.g. multiple SNPs in two candidate regions). We found that for sparse models (i.e. with only a few non-zero interaction parameters), the adaptive SSU test and its close variant, an adaptive version of the weighted sum of squared score (SSUw) test, improved the power over their non-adaptive versions, and performed consistently well across various scenarios. The proposed adaptive tests are built in the general framework of regression analysis, and can thus be applied to various types of traits in the presence of covariates.
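The (non-adaptive) SSU statistic has a simple form: the score vector for the interaction parameters is U = X'(y - ȳ) and the statistic is U'U. The sketch below replaces the asymptotic mixture-of-chi-squares reference distribution with a permutation p-value; the simulated genotype data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def ssu_perm(X, y, n_perm=999):
    # Score vector U = X^T (y - ybar); SSU statistic T = U^T U.
    yc = y - y.mean()
    t_obs = float(np.sum((X.T @ yc) ** 2))
    exceed = sum(
        float(np.sum((X.T @ rng.permutation(yc)) ** 2)) >= t_obs
        for _ in range(n_perm)
    )
    return t_obs, (exceed + 1) / (n_perm + 1)

# Toy data: 100 subjects, 10 SNP-pair interaction terms, binary trait:
X = rng.integers(0, 3, size=(100, 10)).astype(float)
y = rng.integers(0, 2, size=100).astype(float)
t, p = ssu_perm(X, y)
```

Because the statistic sums squared score components rather than inverting a covariance matrix, it stays well-defined even when the number of interaction terms is large relative to the sample size, which is the robustness property exploited above.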

7.

Background  

We consider the effects of dependence among variables of high-dimensional data in multiple hypothesis testing problems, in particular the False Discovery Rate (FDR) control procedures. Recent simulation studies consider only simple correlation structures among variables, which hardly reflect the features of real data. Our aim is to systematically study the effects of several network features, such as sparsity and correlation strength, by imposing dependence structures among variables using random correlation matrices.
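One way to impose dependence via a random correlation matrix and then apply the Benjamini-Hochberg FDR procedure, as a self-contained sketch; the dimension and FDR level are assumptions, and real studies would tune the matrix toward specific sparsity and correlation-strength targets.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
m = 100  # number of variables (assumed)

# Random correlation matrix: normalize a Wishart-type matrix W W^T.
W = rng.normal(size=(m, m))
S = W @ W.T
d = np.sqrt(np.diag(S))
C = S / np.outer(d, d)

# Correlated null z-statistics and two-sided p-values:
z = np.linalg.cholesky(C) @ rng.normal(size=m)
p = 2.0 * stats.norm.sf(np.abs(z))

def benjamini_hochberg(pvals, q=0.05):
    # Step-up BH procedure: indices of rejected hypotheses at FDR level q.
    order = np.argsort(pvals)
    below = pvals[order] <= q * np.arange(1, len(pvals) + 1) / len(pvals)
    k = below.nonzero()[0].max() + 1 if below.any() else 0
    return order[:k]

rejected = benjamini_hochberg(p)
```

Repeating the draw of `C` across replicates, with signal added to a subset of variables, gives an empirical view of how FDR and power vary with the dependence structure.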

8.
Zhao W  Li H  Hou W  Wu R 《Genetics》2007,176(3):1879-1892
The biological and statistical advantages of functional mapping result from joint modeling of the mean-covariance structures for developmental trajectories of a complex trait measured at a series of time points. While an increased number of time points can better describe the dynamic pattern of trait development, significant difficulties in performing functional mapping arise from prohibitive computational times required as well as from modeling the structure of a high-dimensional covariance matrix. In this article, we develop a statistical model for functional mapping of quantitative trait loci (QTL) that govern the developmental process of a quantitative trait on the basis of wavelet dimension reduction. By breaking an original signal down into a spectrum by taking its averages (smooth coefficients) and differences (detail coefficients), we used the discrete Haar wavelet shrinkage technique to transform an inherently high-dimensional biological problem into its tractable low-dimensional representation within the framework of functional mapping constructed by a Gaussian mixture model. Unlike conventional nonparametric modeling of wavelet shrinkage, we incorporate mathematical aspects of developmental trajectories into the smooth coefficients used for QTL mapping, thus preserving the biological relevance of functional mapping in formulating a number of hypothesis tests at the interplay between gene actions/interactions and developmental patterns for complex phenotypes. This wavelet-based parametric functional mapping has been statistically examined and compared with full-dimensional functional mapping through simulation studies. It holds great promise as a powerful statistical tool to unravel the genetic machinery of developmental trajectories with large-scale high-dimensional data.
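The averages-and-differences decomposition with hard thresholding can be sketched in a few lines; this is a generic one-level Haar shrinkage, not the paper's full parametric functional-mapping model.

```python
import numpy as np

def haar_step(x):
    # One level of the orthonormal Haar transform: averages (smooth
    # coefficients) and differences (detail coefficients). len(x) even.
    x = np.asarray(x, dtype=float)
    smooth = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return smooth, detail

def haar_shrink(x, thresh):
    # Hard-threshold the detail coefficients, then invert the transform.
    s, d = haar_step(x)
    d = np.where(np.abs(d) < thresh, 0.0, d)
    out = np.empty(len(x))
    out[0::2] = (s + d) / np.sqrt(2.0)
    out[1::2] = (s - d) / np.sqrt(2.0)
    return out

# A large threshold removes all detail, leaving pairwise averages:
smoothed = haar_shrink(np.array([4.0, 2.0, 6.0, 6.0]), 10.0)
```

In the paper's setting the smooth coefficients, rather than raw time-point measurements, enter the Gaussian mixture model, which is what makes the covariance structure tractable.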

9.
Background
Recent developments in neuroimaging and genetic testing technologies have made it possible to measure pathological features associated with Alzheimer's disease (AD) in vivo. Mining potential molecular markers of AD from high-dimensional, multi-modal neuroimaging and omics data will provide a new basis for early diagnosis and intervention in AD. In order to discover real pathogenic mutations and understand the pathogenic mechanisms of AD, many machine learning methods have been designed and successfully applied to the analysis and processing of large-scale AD biomedical data.
Objective
To introduce and summarize the applications and challenges of machine learning methods in Alzheimer's disease multi-source data analysis.
Methods
The literature selected for the review was obtained from Google Scholar, PubMed, and Web of Science. Retrieval keywords include Alzheimer's disease, bioinformatics, imaging genetics, genome-wide association studies, molecular interaction networks, multi-omics data integration, and so on.
Conclusion
This study comprehensively introduces machine learning-based processing techniques for AD neuroimaging data and then shows the progress of computational analysis methods for omics data, such as the genome and proteome. Subsequently, machine learning methods for AD imaging analysis are summarized. Finally, we elaborate on the current emerging technologies of multi-modal neuroimaging and multi-omics joint analysis, and present some outstanding issues and future research directions.

10.
Epigenetic research leads to complex data structures. Since parametric model assumptions for the distribution of epigenetic data are hard to verify we introduce in the present work a nonparametric statistical framework for two-group comparisons. Furthermore, epigenetic analyses are often performed at various genetic loci simultaneously. Hence, in order to be able to draw valid conclusions for specific loci, an appropriate multiple testing correction is necessary. Finally, with technologies available for the simultaneous assessment of many interrelated biological parameters (such as gene arrays), statistical approaches also need to deal with a possibly unknown dependency structure in the data. Our statistical approach to the nonparametric comparison of two samples with independent multivariate observables is based on recently developed multivariate multiple permutation tests. We adapt their theory in order to cope with families of hypotheses regarding relative effects. Our results indicate that the multivariate multiple permutation test keeps the pre-assigned type I error level for the global null hypothesis. In combination with the closure principle, the family-wise error rate for the simultaneous test of the corresponding locus/parameter-specific null hypotheses can be controlled. In applications we demonstrate that group differences in epigenetic data can be detected reliably with our methodology.

11.
The isochore concept for the human genome sequence was challenged in an analysis by the International Human Genome Sequencing Consortium (IHGSC). We argue here that a statement in the IHGSC analysis concerning the existence of isochores is incorrect, because it applied an inappropriate statistical test. Testing for the existence of isochores should be equivalent to testing the homogeneity of windowed GC%. The statistical test applied in the IHGSC analysis, the binomial test, is however a test of whether a sequence is random at the base level. For testing the existence of isochores, or homogeneity in GC%, we propose to use another statistical test: the analysis of variance (ANOVA). It can be shown that DNA sequences that are rejected by the binomial test may not be rejected by the ANOVA test.
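The ANOVA-on-windowed-GC% idea can be sketched on simulated segments; the segment lengths, window size, and GC levels below are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

def make_seq(n, gc_frac):
    # Random sequence with the given expected GC fraction.
    bases = np.where(rng.random(n) < gc_frac,
                     rng.choice(list("GC"), n), rng.choice(list("AT"), n))
    return "".join(bases)

def windowed_gc(seq, win):
    # GC fraction in consecutive non-overlapping windows.
    gc = np.array([ch in "GC" for ch in seq], dtype=float)
    n_win = len(gc) // win
    return gc[: n_win * win].reshape(n_win, win).mean(axis=1)

# Two segments with different GC levels; ANOVA on windowed GC% tests
# homogeneity across segments (the isochore question), whereas the
# binomial test asks whether bases are random within a window.
g1 = windowed_gc(make_seq(20_000, 0.35), 1000)
g2 = windowed_gc(make_seq(20_000, 0.55), 1000)
F, p = stats.f_oneway(g1, g2)
```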

12.
Nested effects models for high-dimensional phenotyping screens
MOTIVATION: In high-dimensional phenotyping screens, a large number of cellular features are observed after perturbing genes by knockouts or RNA interference. Comprehensive analysis of perturbation effects is one of the most powerful techniques for attributing functions to genes, but little work has been done so far to adapt statistical and computational methodology to the specific needs of large-scale and high-dimensional phenotyping screens. RESULTS: We introduce and compare probabilistic methods to efficiently infer a genetic hierarchy from the nested structure of observed perturbation effects. These hierarchies elucidate the structures of signaling pathways and regulatory networks. Our methods achieve two goals: (1) they reveal clusters of genes with highly similar phenotypic profiles, and (2) they order (clusters of) genes according to subset relationships between phenotypes. We evaluate our algorithms in the controlled setting of simulation studies and show their practical use in two experimental scenarios: (1) a data set investigating the response to microbial challenge in Drosophila melanogaster, and (2) a compendium of expression profiles of Saccharomyces cerevisiae knockout strains. We show that our methods identify biologically justified genetic hierarchies of perturbation effects. AVAILABILITY: The software used in our analysis is freely available in the R package 'nem' from www.bioconductor.org.

13.
Extracting features from high-dimensional data is a critically important task for pattern recognition and machine learning applications. High-dimensional data typically have many more variables than observations and contain significant noise, missing components, or outliers. Features extracted from high-dimensional data need to be discriminative and sparse, and should capture the essential characteristics of the data. In this paper, we present a way of constructing multivariate features and then classifying the data into the proper classes. The resulting small subset of features is nearly the best in the sense of Greenshtein's persistence; however, the estimated feature weights may be biased. We take a systematic approach to correcting these biases. We use conjugate gradient-based primal-dual interior-point techniques for large-scale problems. We apply our procedure to microarray gene analysis. The effectiveness of our method is confirmed by experimental results.

14.
Many previous neuroimaging studies of neuronal structures in patients with obsessive-compulsive disorder (OCD) used univariate statistical tests on unimodal imaging measurements. Although the univariate methods revealed important aberrations of local morphometry in OCD patients, the covariance structure of the anatomical alterations remains unclear. Motivated by recent developments of multivariate techniques in the neuroimaging field, we applied a fusion method called “mCCA+jICA” to multimodal structural data of T1-weighted magnetic resonance imaging (MRI) and diffusion tensor imaging (DTI) from 30 unmedicated patients with OCD and 34 healthy controls. Among six highly correlated multimodal networks (p < 0.0001), we found significant alterations of the interrelated gray and white matter networks over occipital and parietal cortices, frontal interhemispheric connections and cerebella (False Discovery Rate q ≤ 0.05). In addition, we found white matter networks around the basal ganglia that correlated with a subdimension of OC symptoms, namely ‘harm/checking’ (q ≤ 0.05). The present study not only agrees with previous unimodal findings in OCD, but also quantifies the association of the altered networks across imaging modalities.

15.
Developments in whole genome biotechnology have stimulated statistical focus on prediction methods. We review here methodology for classifying patients into survival risk groups and for using cross-validation to evaluate such classifications. Measures of discrimination for survival risk models include separation of survival curves, time-dependent ROC curves and Harrell's concordance index. For high-dimensional data applications, however, computing these measures as re-substitution statistics on the same data used for model development results in highly biased estimates. Most developments in methodology for survival risk modeling with high-dimensional data have utilized separate test data sets for model evaluation. Cross-validation has sometimes been used for optimization of tuning parameters. In many applications, however, the data available are too limited for effective division into training and test sets and consequently authors have often either reported re-substitution statistics or analyzed their data using binary classification methods in order to utilize familiar cross-validation. In this article we have tried to indicate how to utilize cross-validation for the evaluation of survival risk models; specifically how to compute cross-validated estimates of survival distributions for predicted risk groups and how to compute cross-validated time-dependent ROC curves. We have also discussed evaluation of the statistical significance of a survival risk model and evaluation of whether high-dimensional genomic data adds predictive accuracy to a model based on standard covariates alone.

16.
The most widely used statistical methods for finding differentially expressed genes (DEGs) are essentially univariate. In this study, we present a new T2 statistic for analyzing microarray data. We implemented our method using a multiple forward search (MFS) algorithm designed for selecting a subset of feature vectors in high-dimensional microarray datasets. The proposed T2 statistic is a corollary of the statistic originally developed for multivariate analyses and possesses two prominent statistical properties. First, our method takes into account the multidimensional structure of microarray data. Utilizing the information hidden in gene interactions allows for finding genes whose differential expression is not marginally detectable with univariate testing methods. Second, the statistic has a close relationship to discriminant analyses for classification of gene expression patterns. Our search algorithm sequentially maximizes the gene expression difference/distance between two groups of genes. Including such a set of DEGs in the initial feature variables may increase the power of classification rules. We validated our method using a spike-in HGU95 dataset from Affymetrix. The utility of the new method was demonstrated by application to the analysis of gene expression patterns in human liver cancers and breast cancers. Extensive bioinformatics analyses and cross-validation of the DEGs identified in the application datasets showed the significant advantages of our new algorithm.
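The classical two-sample Hotelling T2, which the statistic above builds on, requires more observations than variables (which is why a subset-selection step such as MFS is needed in the n << p microarray setting). A textbook sketch with simulated data:

```python
import numpy as np
from scipy import stats

def hotelling_t2(X, Y):
    # Two-sample Hotelling T^2 with the classical F reference
    # distribution; requires n1 + n2 - 2 greater than the dimension.
    n1, n2 = len(X), len(Y)
    p_dim = X.shape[1]
    diff = X.mean(axis=0) - Y.mean(axis=0)
    S = ((n1 - 1) * np.cov(X, rowvar=False) +
         (n2 - 1) * np.cov(Y, rowvar=False)) / (n1 + n2 - 2)
    t2 = n1 * n2 / (n1 + n2) * diff @ np.linalg.solve(S, diff)
    f_stat = (n1 + n2 - p_dim - 1) / (p_dim * (n1 + n2 - 2)) * t2
    return t2, stats.f.sf(f_stat, p_dim, n1 + n2 - p_dim - 1)

# Toy data: two groups of 30 samples over a 5-gene subset, mean shift 1:
rng = np.random.default_rng(7)
X = rng.normal(0.0, 1.0, (30, 5))
Y = rng.normal(1.0, 1.0, (30, 5))
t2, pval = hotelling_t2(X, Y)
```

Because the pooled covariance enters through `solve(S, diff)`, correlated genes whose individual shifts are small can still produce a large T2, which is the multivariate advantage the abstract describes.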

17.
This study identifies dynamical properties of maltose-binding protein (MBP) useful in unveiling active site residues susceptible to ligand binding. The described methodology has been previously used in support of novel topological techniques of persistent homology and statistical inference in complex, multi-scale, high-dimensional data often encountered in computational biophysics. Here we outline a computational protocol that is based on the anisotropic elastic network models of 14 all-atom three-dimensional protein structures. We introduce the notion of dynamical distance matrices as a measure of correlated interactions among 370 amino acid residues that constitute a single protein. The dynamical distance matrices serve as an input for a persistent homology suite of codes to further distinguish a small subset of residues with high affinity for ligand binding and allosteric activity. In addition, we show that ligand-free closed MBP structures require lower deformation energies than open MBP structures, which may be used in categorization of time-evolving molecular dynamics structures. Analysis of the most probable allosteric coupling pathways between active site residues and the protein exterior is also presented.

18.
Latent class models provide a useful framework for clustering observations based on several features. Application of latent class methodology to correlated, high-dimensional ordinal data poses many challenges. Unconstrained analyses may not result in an estimable model. Thus, information contained in ordinal variables may not be fully exploited by researchers. We develop a penalized latent class model to facilitate analysis of high-dimensional ordinal data. By stabilizing maximum likelihood estimation, we are able to fit an ordinal latent class model that would otherwise not be identifiable without application of strict constraints. We illustrate our methodology in a study of schwannoma, a peripheral nerve sheath tumor, that included 3 clinical subtypes and 23 ordinal histological measures.

19.
Thomas Burger 《Proteomics》2023,23(18):2200406
In discovery proteomics, as well as many other “omic” approaches, the possibility to test for the differential abundance of hundreds (or of thousands) of features simultaneously is appealing, despite requiring specific statistical safeguards, among which controlling for the false discovery rate (FDR) has become standard. Moreover, when more than two biological conditions or group treatments are considered, it has become customary to rely on the one-way analysis of variance (ANOVA) framework, where a first global differential abundance landscape provided by an omnibus test can be subsequently refined using various post-hoc tests (PHTs). However, the interactions between the FDR control procedures and the PHTs are complex, because both correspond to different types of multiple test corrections (MTCs). This article surveys various ways to orchestrate them in a data processing workflow and discusses their pros and cons.
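One possible orchestration of the two correction layers, sketched on simulated abundances: BH-FDR across the per-feature omnibus ANOVA p-values, followed by unadjusted pairwise t-tests on the surviving features. This is just one of the workflow orderings the article surveys, and the data dimensions are assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
n_feat, n_rep, q = 50, 8, 0.05

# Three conditions; feature 0 carries a real shift, the rest are null:
a = rng.normal(size=(n_feat, n_rep))
b = rng.normal(size=(n_feat, n_rep))
c = rng.normal(size=(n_feat, n_rep))
c[0] += 5.0

# Omnibus one-way ANOVA per feature:
p_omni = np.array([stats.f_oneway(a[i], b[i], c[i]).pvalue
                   for i in range(n_feat)])

# Benjamini-Hochberg step-up across the omnibus p-values:
order = np.argsort(p_omni)
passed = p_omni[order] <= q * np.arange(1, n_feat + 1) / n_feat
selected = order[:passed.nonzero()[0].max() + 1] if passed.any() else []

# Post-hoc pairwise t-tests, restricted to features passing the screen:
pairs = {"a-b": (a, b), "a-c": (a, c), "b-c": (b, c)}
posthoc = {i: {name: stats.ttest_ind(x[i], y[i]).pvalue
               for name, (x, y) in pairs.items()}
           for i in selected}
```

Whether the pairwise tests should themselves be corrected, and at which level, is exactly the kind of interaction between MTC layers the article discusses.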

20.
Although much theoretical work has been undertaken to derive thresholds for statistical significance in genetic linkage studies, real data are often complicated by many factors, such as missing individuals or uninformative markers, which make the validity of these theoretical results questionable. Many simulation-based methods have been proposed in the literature to determine empirically the statistical significance of the observed test statistics. However, these methods either are not generally applicable to complex pedigree structures or are too time-consuming. In this article, we propose a computationally efficient simulation procedure that is applicable to arbitrary pedigree structures. This procedure can be combined with statistical tests, to assess the statistical significance for genetic linkage between a locus and a qualitative or quantitative trait. Furthermore, the genomewide significance level can be appropriately controlled when many linked markers are studied in a genomewide scan. Simulated data and a diabetes data set are analyzed to demonstrate the usefulness of this novel simulation method.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号