Similar literature
20 similar articles found (search time: 31 ms)
1.
MOTIVATION: Two important questions for the analysis of gene expression measurements from different sample classes are (1) how to classify samples and (2) how to identify meaningful gene signatures (ranked gene lists) exhibiting the differences between classes and sample subsets. Solutions to both questions have immediate biological and biomedical applications. To achieve optimal classification performance, a suitable combination of classifier and gene selection method needs to be specifically selected for a given dataset. The selected gene signatures can be unstable and the resulting classification accuracy unreliable, particularly when considering different subsets of samples. Both unstable gene signatures and overestimated classification accuracy can impair biological conclusions. METHODS: We address these two issues by repeatedly evaluating the classification performance of all models, i.e. pairwise combinations of various gene selection and classification methods, for random subsets of arrays (sampling). A model score is used to select the most appropriate model for the given dataset. Consensus gene signatures are constructed by extracting those genes frequently selected over many samplings. Sampling additionally permits measurement of the stability of the classification performance for each model, which serves as a measure of model reliability. RESULTS: We analyzed a large gene expression dataset with 78 measurements of four different cartilage sample classes. Classifiers trained on subsets of measurements frequently produce models with highly variable performance. Our approach provides reliable classification performance estimates via sampling. In addition to reliable classification performance, we determined stable consensus signatures (i.e. gene lists) for sample classes. Manual literature screening showed that these genes are highly relevant to our gene expression experiment with osteoarthritic cartilage. 
We compared our approach to others based on a publicly available dataset on breast cancer. AVAILABILITY: R package at http://www.bio.ifi.lmu.de/~davis/edaprakt
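The repeated-sampling scheme described above can be illustrated with a minimal sketch. The nearest-centroid classifier and the mean-minus-spread score below are hypothetical stand-ins for the paper's pairwise gene-selection/classifier combinations and its model score, chosen only to keep the example self-contained:

```python
import random
from statistics import mean, stdev

def nearest_centroid_predict(train, labels, x):
    """Predict the class whose feature centroid is closest to x."""
    best, best_d = None, float("inf")
    for c in set(labels):
        pts = [t for t, l in zip(train, labels) if l == c]
        centroid = [sum(col) / len(pts) for col in zip(*pts)]
        d = sum((a - b) ** 2 for a, b in zip(centroid, x))
        if d < best_d:
            best, best_d = c, d
    return best

def sampling_score(X, y, n_rounds=50, test_frac=0.3, seed=0):
    """Model score over random subsamplings: mean accuracy minus its
    spread, so unstable models are penalized (an illustrative score)."""
    rng = random.Random(seed)
    idx = list(range(len(X)))
    accs = []
    for _ in range(n_rounds):
        rng.shuffle(idx)
        k = max(1, int(test_frac * len(idx)))
        test, train = idx[:k], idx[k:]
        correct = sum(
            nearest_centroid_predict([X[i] for i in train],
                                     [y[i] for i in train],
                                     X[j]) == y[j]
            for j in test
        )
        accs.append(correct / len(test))
    return mean(accs) - stdev(accs)
```

Applying the same loop to every candidate model and keeping the highest-scoring one mirrors the model-selection step; the standard-deviation term penalizes models whose performance varies strongly across samplings.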

2.
3.
The present study was carried out in order to obtain a numerical classifier for the assessment of the malignancy in astrocytomas including glioblastomas ('astrocytomas grade 4'). The attempt resulted in 'TESTAST 268', a classifier based on a reference sample of 268 tumours, 67 in each of four malignancy classes. TESTAST 268 aids the identification of astrocytomas with one of four malignancy classes by means of eight classification variables, five histologic and three non-histologic. Identification is achieved with the aid of linear discriminant functions, both according to Bayes' decision rule (BAYTEST) and by canonical discriminant analysis (CANTEST) using the squared Mahalanobis distance. The discriminant functions with the calibration of the reference sample of the 268 tumours may be implemented on personal and even small pocket computers for practical application.

4.
One of the major goals in cellular neurobiology is meaningful cell classification, yet many issues in cell classification remain unresolved. Neuronal classification usually starts with grouping cells into classes according to their main morphological features. When such a qualitative classification is tested quantitatively, a considerable overlap between cell types often appears, and little has been published on this problem. To address this shortcoming, we undertook the present study with the aim of offering a novel method for resolving the class-overlap problem. To illustrate our method, we analyzed a sample of 124 neurons from the adult human dentate nucleus. Among them we qualitatively selected 55 neurons with small dendritic fields (the small neurons) and 69 asymmetrical neurons with large dendritic fields (the large neurons). We showed that these two samples are normally and independently distributed. By measuring the neuronal soma areas of both samples, we observed that the corresponding normal curves intersect. We proved that the abscissa of the point of intersection of the curves can represent the boundary between the two adjacent overlapping neuronal classes, since the error introduced by such a division is minimal. A statistical evaluation of the division was also performed.
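The boundary construction described above reduces to finding where two normal density curves intersect: equate the two Gaussian log-densities and solve the resulting quadratic. A sketch (the example means and standard deviations in the usage are illustrative, not the measured soma-area values):

```python
import math

def class_boundary(mu1, s1, mu2, s2):
    """Abscissa where the densities N(mu1, s1) and N(mu2, s2) intersect,
    taken between the two class means, where misclassification error
    from splitting the classes at that point is minimal."""
    # Equate log-densities: quadratic a*x^2 + b*x + c = 0
    a = 1 / (2 * s1**2) - 1 / (2 * s2**2)
    b = mu2 / s2**2 - mu1 / s1**2
    c = mu1**2 / (2 * s1**2) - mu2**2 / (2 * s2**2) + math.log(s1 / s2)
    if abs(a) < 1e-12:                  # equal variances: single crossing
        return -c / b
    disc = math.sqrt(b**2 - 4 * a * c)
    roots = [(-b + disc) / (2 * a), (-b - disc) / (2 * a)]
    lo, hi = min(mu1, mu2), max(mu1, mu2)
    # With distinct means, exactly one crossing lies between them
    return next(r for r in roots if lo <= r <= hi)
```

For equal variances the boundary is simply the midpoint of the means, e.g. `class_boundary(0, 2, 10, 2)` gives 5.0; with unequal variances the crossing shifts toward the tighter class.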

5.
MOTIVATION: We recently introduced a multivariate approach that selects a subset of predictive genes jointly for sample classification based on expression data. We tested the algorithm on colon and leukemia data sets. As an extension to our earlier work, we systematically examine the sensitivity, reproducibility and stability of gene selection/sample classification to the choice of parameters of the algorithm. METHODS: Our approach combines a Genetic Algorithm (GA) and the k-Nearest Neighbor (KNN) method to identify genes that can jointly discriminate between different classes of samples (e.g. normal versus tumor). The GA/KNN method is a stochastic supervised pattern recognition method. The genes identified are subsequently used to classify independent test set samples. RESULTS: The GA/KNN method is capable of selecting a subset of predictive genes from a large noisy data set for sample classification. It is a multivariate approach that can capture the correlated structure in the data. We find that for a given data set gene selection is highly repeatable in independent runs using the GA/KNN method. In general, however, gene selection may be less robust than classification. AVAILABILITY: The method is available at http://dir.niehs.nih.gov/microarray/datamining CONTACT: LI3@niehs.nih.gov
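A minimal sketch of the KNN half of the GA/KNN idea: the leave-one-out k-NN accuracy on a candidate gene subset is the kind of fitness a genetic algorithm would maximize while searching over subsets. The GA search loop itself is omitted, and the data layout (samples as tuples of expression values, genes as column indices) is an assumption for illustration:

```python
def knn_loo_fitness(X, y, gene_subset, k=3):
    """Leave-one-out k-NN accuracy restricted to the given gene subset;
    usable as a GA fitness for gene-subset search."""
    def dist(a, b):
        # Squared Euclidean distance over the selected genes only
        return sum((a[g] - b[g]) ** 2 for g in gene_subset)

    correct = 0
    for i in range(len(X)):
        # k nearest neighbors of sample i among all other samples
        nbrs = sorted((j for j in range(len(X)) if j != i),
                      key=lambda j: dist(X[i], X[j]))[:k]
        votes = [y[j] for j in nbrs]
        pred = max(set(votes), key=votes.count)   # majority vote
        correct += pred == y[i]
    return correct / len(X)
```

A GA would repeatedly mutate and recombine gene subsets, keeping those with higher leave-one-out fitness; tallying how often each gene appears in top-scoring subsets across runs gives the repeatability the abstract describes.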

6.

Background  

The MLPA method is a potentially useful semi-quantitative method for detecting copy number alterations in targeted regions. In this paper, we propose a normalization procedure based on a non-linear mixed model, as well as a new approach for determining the statistical significance of altered probes based on a linear mixed model. This method establishes a threshold by using different tolerance intervals that accommodate the specific random error variability observed in each test sample.

7.
Differences between pollen assemblages obtained from lacustrine and terrestrial surface sediments may affect the ability to obtain reliable pollen-based climate reconstructions. We test the effect of combining modern pollen samples from multiple depositional environments on various pollen-based climate reconstruction methods, using modern pollen samples from British Columbia, Canada and the adjacent states of Washington, Montana, Idaho and Oregon. This dataset includes samples from a number of depositional environments, including soil and lacustrine sediments.

Combining lacustrine and terrestrial (soil) samples increases the root mean squared error of prediction (RMSEP) for reconstructions of summer growing degree days when weighted-averaging partial-least-squares (WAPLS), weighted-averaging (WA) and non-metric-multidimensional-scaling/generalized-additive-models (NMDS/GAM) are used, but reduces RMSEP for randomForest, the modern analogue technique (MAT) and the Mixed method, although a slight increase occurs for MAT at the highest sample size. Summer precipitation reconstructions using MAT, randomForest and NMDS/GAM suffer from increased RMSEP when both lacustrine and terrestrial samples are used, but WA, WAPLS and the Mixed method show declines in RMSEP.

These results indicate that researchers interested in using pollen databases to reconstruct climate variables need to consider the depositional environments of the samples within the analytical dataset, since pooled datasets can increase model error for some climate variables. However, since the effects of pooled datasets vary between climate variables and between pollen-based climate reconstruction methods, we do not reject the use of mixed samples altogether. We finish by proposing steps to test whether significant reductions in model error can be obtained by splitting or combining samples from multiple substrates.
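As a point of reference for the methods compared above, classic weighted averaging (WA) is the simplest: each taxon gets an abundance-weighted climate optimum from the modern training set, and a sample's reconstruction is the abundance-weighted mean of those optima. The sketch below omits the deshrinking step a full WA transfer function would include, and the data layout (rows of taxon abundances per training sample) is an assumption:

```python
def wa_reconstruct(train_abund, train_climate, fossil_sample):
    """Classic weighted-averaging transfer function (no deshrinking):
    taxon optima are abundance-weighted climate means; the fossil
    reconstruction is the abundance-weighted mean of those optima."""
    n_taxa = len(fossil_sample)
    optima = []
    for t in range(n_taxa):
        w = sum(row[t] for row in train_abund)
        optima.append(sum(row[t] * c
                          for row, c in zip(train_abund, train_climate)) / w)
    tot = sum(fossil_sample)
    return sum(a * o for a, o in zip(fossil_sample, optima)) / tot
```

With this structure it is easy to see how pooling depositional environments can hurt: if the same taxon has different abundance-climate relationships in soil and lake samples, its single pooled optimum is a compromise that inflates RMSEP.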

8.
Understanding the functional relationship between sample size and the performance of species richness estimators is necessary to optimize limited sampling resources against estimation error. Nonparametric estimators such as Chao and Jackknife demonstrate strong performance, but consensus is lacking as to which estimator performs better under constrained sampling. We explore a method to improve the estimators under such a scenario. The method we propose involves randomly splitting species-abundance data from a single sample into two equally sized samples, and using an appropriate incidence-based estimator to estimate richness. To test this method, we assume a lognormal species-abundance distribution (SAD) with varying coefficients of variation (CV), generate samples using MCMC simulations, and use the expected mean squared error as the performance criterion of the estimators. We test this method for the Chao, Jackknife, ICE, and ACE estimators. Between abundance-based estimators applied to the single sample and incidence-based estimators applied to the split-in-two samples, Chao2 performed best when CV < 0.65, and the incidence-based Jackknife performed best when CV > 0.65, given that the ratio of sample size to observed species richness is greater than a critical value given by a power function of CV with respect to the abundance of the sampled population. The proposed method increases the performance of the estimators substantially and is more effective when more rare species are present in an assemblage. We also show that the splitting method works qualitatively similarly well when the SADs are log series, geometric series, and negative binomial. We demonstrate an application of the proposed method by estimating the richness of zooplankton communities in samples of ballast water.

The proposed splitting method is an alternative to sampling a large number of individuals to increase the accuracy of richness estimations; it is therefore appropriate for a wide range of resource-limited sampling scenarios in ecology.
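The splitting step can be sketched directly: shuffle the individuals of one abundance sample into two equal incidence units and apply Chao2. The bias-corrected form of Chao2 is used here because it remains defined when no species occurs in both halves; the details beyond that (seeding, even split) are illustrative choices:

```python
import random
from collections import Counter

def chao2_split(individuals, seed=0):
    """Randomly split one abundance sample (a list of species labels,
    one per individual) into two equal incidence units and return the
    bias-corrected Chao2 richness estimate (m = 2 sampling units)."""
    rng = random.Random(seed)
    pool = list(individuals)
    rng.shuffle(pool)
    half = len(pool) // 2
    units = [set(pool[:half]), set(pool[half:])]
    # Incidence: in how many of the two units does each species occur?
    incidence = Counter(sp for unit in units for sp in unit)
    s_obs = len(incidence)
    q1 = sum(1 for v in incidence.values() if v == 1)  # uniques
    q2 = sum(1 for v in incidence.values() if v == 2)  # duplicates
    m = 2
    return s_obs + (m - 1) / m * q1 * (q1 - 1) / (2 * (q2 + 1))
```

Species seen in only one of the two halves (the "uniques", q1) drive the estimated number of species missed entirely, which is why the method gains most when rare species are abundant in the assemblage.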

9.
MOTIVATION: DNA microarray technology has been increasingly used in cancer research. In the literature, discovery of putative classes and classification to known classes based on gene expression data have largely been treated as separate problems. This paper offers a unified approach to class discovery and classification, which we believe is more appropriate, and has greater applicability, in practical situations. RESULTS: We model the gene expression profile of a tumor sample as arising from a finite mixture distribution, with each component characterizing the gene expression levels in a class. The proposed method was applied to a leukemia dataset, and good results were obtained. With appropriate choices of genes and preprocessing method, the number of leukemia types and subtypes was correctly inferred, and all the tumor samples were correctly classified into their respective type/subtype. Further evaluation of the method was carried out on other variants of the leukemia data and a colon dataset.

10.

Background

Microarray technology, like other functional genomics experiments, allows simultaneous measurement of thousands of genes in each sample. Both the prediction accuracy and the interpretability of a classifier can be enhanced by performing the classification based only on selected discriminative genes. We propose a statistical method for selecting genes based on overlapping analysis of expression data across classes. This method yields a novel measure, called the proportional overlapping score (POS), of a feature's relevance to a classification task.

Results

We apply POS, along with four widely used gene selection methods, to several benchmark gene expression datasets. The experimental results of classification error rates, computed using the Random Forest, k-Nearest Neighbor and Support Vector Machine classifiers, show that POS achieves better performance.

Conclusions

A novel gene selection method, POS, is proposed. POS analyzes the overlap of expression values across classes, taking into account the proportions of overlapping samples. It robustly defines a mask for each gene that allows it to minimize the effect of expression outliers. The constructed masks, along with a novel gene score, are exploited to produce the selected subset of genes.
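A stripped-down illustration of the overlap idea behind POS (not the authors' exact score, which additionally uses per-gene masks and per-class sample proportions): measure what fraction of samples fall inside the interval where the two classes' expression ranges overlap, so a perfectly separating gene scores 0 and a fully overlapping one scores 1:

```python
def overlap_score(values, labels):
    """Fraction of samples whose expression lies in the region where the
    two classes' expression ranges overlap (lower = more discriminative).
    `labels` are 0/1 class assignments, one per value."""
    a = [v for v, l in zip(values, labels) if l == 0]
    b = [v for v, l in zip(values, labels) if l == 1]
    lo, hi = max(min(a), min(b)), min(max(a), max(b))
    if lo > hi:                 # class ranges are disjoint: no overlap
        return 0.0
    return sum(lo <= v <= hi for v in values) / len(values)
```

Ranking genes by this score and keeping the lowest-scoring ones is the simplest form of overlap-based selection; the robustness to outliers that POS obtains via its masks is deliberately not modeled here.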

Electronic supplementary material

The online version of this article (doi:10.1186/1471-2105-15-274) contains supplementary material, which is available to authorized users.

11.

Background

When unaccounted-for group-level characteristics affect an outcome variable, traditional linear regression is inefficient and can be biased. The random- and fixed-effects estimators (RE and FE, respectively) are two competing methods that address these problems. While each estimator controls for otherwise unaccounted-for effects, the two estimators require different assumptions. Health researchers tend to favor RE estimation, while researchers from some other disciplines tend to favor FE estimation. In addition to RE and FE, an alternative method called within-between (WB) was suggested by Mundlak in 1978, although it is utilized infrequently.

Methods

We conduct a simulation study to compare RE, FE, and WB estimation across 16,200 scenarios. The scenarios vary in the number of groups, the size of the groups, within-group variation, goodness-of-fit of the model, and the degree to which the model is correctly specified. Estimator preference is determined by lowest mean squared error of the estimated marginal effect and root mean squared error of fitted values.

Results

Although there are scenarios when each estimator is most appropriate, the cases in which traditional RE estimation is preferred are less common. In finite samples, the WB approach outperforms both traditional estimators. The Hausman test guides the practitioner to the estimator with the smallest absolute error only 61% of the time, and in many sample sizes simply applying the WB approach produces smaller absolute errors than following the suggestion of the test.

Conclusions

Specification and estimation should be carefully considered and ultimately guided by the objective of the analysis and characteristics of the data. The WB approach has been underutilized, particularly for inference on marginal effects in small samples. Blindly applying any estimator can lead to bias, inefficiency, and flawed inference.
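Mundlak's within-between decomposition is mechanically simple: each covariate is split into its group mean (the between part) and its deviation from that mean (the within part), and both enter the regression as separate regressors. A sketch of the decomposition step, independent of any particular regression library:

```python
from collections import defaultdict

def within_between(x, groups):
    """Mundlak-style decomposition of a covariate: returns the within
    part (deviation from the group mean) and the between part (the
    group mean itself), aligned with the input observations."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for xi, g in zip(x, groups):
        sums[g] += xi
        counts[g] += 1
    means = {g: sums[g] / counts[g] for g in sums}
    within = [xi - means[g] for xi, g in zip(x, groups)]
    between = [means[g] for g in groups]
    return within, between
```

Regressing the outcome on both parts (plus a random group intercept) lets the within coefficient carry the FE-style estimate while the between coefficient absorbs group-level confounding, which is why WB can outperform both traditional estimators in finite samples.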

12.
Multiplexed sequencing relies on specific sample labels, the barcodes, to tag DNA fragments belonging to different samples and to separate the output of the sequencers. However, the barcodes are often corrupted by insertion, deletion and substitution errors introduced during sequencing, which may lead to sample misassignment. In this paper, we propose a barcode construction method, which combines a block error-correction code with a predetermined pseudorandom sequence to generate a base sequence for labeling different samples. Furthermore, to identify the corrupted barcodes for assigning reads to their respective samples, we present a soft decision identification method that consists of inner decoding and outer decoding. The inner decoder establishes the hidden Markov model (HMM) for base insertion/deletion estimation with the pseudorandom sequence, and adapts the forward-backward (FB) algorithm to output the soft information of each bit in the block code. The outer decoder performs soft decision decoding using the soft information to effectively correct multiple errors in the barcodes. Simulation results show that the proposed methods are highly robust to high error rates of insertions, deletions and substitutions in the barcodes. In addition, compared with the inner decoding algorithm of the barcodes based on watermarks, the proposed inner decoding algorithm can greatly reduce the decoding complexity.

13.
Using a quantitative genetic model, this paper compares four different methods for estimating genetic variance components. Given various genetic parameters, data were generated and estimates computed. The number of negative estimates, the sample mean, the sample variance, and the sample mean squared error were computed for each method. It is shown that, if the genetic values are not very small, the traditional Mather-Jinks method is at least as good as any other method. The ML method might be preferable only if the genetic values are very small and the number of loci large.

14.
JX Mi, JX Liu, J Wen. PLoS One 2012, 7(8): e42461
Nearest subspace (NS) classification based on linear regression is a straightforward and efficient method for face recognition. A recently developed NS method, linear regression-based classification (LRC), uses downsampled face images as features to perform face recognition. The basic assumption behind this kind of method is that samples from a certain class lie on their own class-specific subspace. Since there are only a few training samples for each individual class, the small sample size (SSS) problem arises, which gives rise to misclassification in previous NS methods. In this paper, we propose two novel LRC methods based on the idea that every class-specific subspace has its own unique basis vectors. Thus, we consider each class-specific subspace to be spanned by two kinds of basis vectors: the common basis vectors shared by many classes and the class-specific basis vectors owned by one class only. Based on this concept, two classification methods, robust LRC 1 and 2 (RLRC 1 and 2), are given to achieve more robust face recognition. Unlike some previous methods, which need to extract class-specific basis vectors, the proposed methods are developed merely on the basis of the existence of the class-specific basis vectors, without actually calculating them. Experiments on three well-known face databases demonstrate very good performance of the new methods compared with other state-of-the-art methods.

15.
MOTIVATION: The classification of samples using gene expression profiles is an important application in areas such as cancer research and environmental health studies. However, the classification is usually based on a small number of samples, and each sample is a long vector of thousands of gene expression levels. An important issue in parametric modeling for so many gene expression levels is the control of the number of nuisance parameters in the model. Large models often lead to intensive or even intractable computation, while small models may be inadequate for complex data. METHODOLOGY: We propose a two-step empirical Bayes classification method as a solution to this issue. In the first step, we use a model-based clustering algorithm with the non-traditional purpose of assigning gene expression levels to abundance groups. In the second step, by assuming the same variance for all the genes in the same group, we substantially reduce the number of nuisance parameters in our statistical model. RESULTS: The proposed model is more parsimonious, which leads to efficient computation under an empirical Bayes estimation procedure. We consider two real examples and simulate data using our method. Desirably low classification error rates are obtained even when a large number of genes are pre-selected for class prediction.

16.
Based on the approach of McLachlan (1977), a procedure for the conditional and common estimation of the classification error in discriminant analysis is described for k ≥ 2 classes. As a rapid procedure for large sample sizes and feature numbers, a modification of the resubstitution method is proposed that is favourable with respect to computing time. Both methods provide useful estimates of the probability of misclassification. In calculating the weighting function w, deviations from the preconditions known from MANOVA, such as skewness, truncation or inequality of the covariance matrices, hardly play any role; it appears that only variation in the sample sizes of the classes substantially influences the weighting functions. The error rates of the tested error estimation methods likewise depend on the sample sizes of the classes. Violations of the mentioned preconditions in the form described above result in different variations of the error estimates, depending on these sample sizes. A comparison between error estimation and allocation relative to a simulated population demonstrates the goodness of the error estimation procedures used.

17.
This paper studies the problem of building multiclass classifiers for tissue classification based on gene expression. The recent development of microarray technologies has enabled biologists to quantify the expression of tens of thousands of genes in a single experiment, and biologists have begun collecting gene expression data for large numbers of samples. One of the urgent issues in the use of microarray data is to develop methods for characterizing samples based on their gene expression. The most basic step in this research direction is binary sample classification, which has been studied extensively over the past few years. This paper investigates the next step: multiclass classification of samples based on gene expression. The characteristics of expression data (e.g. a large number of genes with a small sample size) make the classification problem more challenging. The process of building multiclass classifiers is divided into two components: (i) selection of the features (i.e. genes) to be used for training and testing and (ii) selection of the classification method. This paper compares various feature selection methods as well as various state-of-the-art classification methods on several multiclass gene expression datasets. Our study indicates that the multiclass classification problem is much more difficult than the binary one for gene expression datasets. The difficulty lies in the fact that the data are of high dimensionality and the sample size is small. Classification accuracy appears to degrade very rapidly as the number of classes increases; in particular, accuracy was very low regardless of the choice of methods for the large-class datasets (e.g. NCI60 and GCM). While increasing the number of samples is a plausible solution to the problem of accuracy degradation, it is important to develop algorithms able to analyze multiple-class expression data effectively for these special datasets.

18.
Serial analysis of gene expression (SAGE) is a technology for quantifying gene expression in biological tissue that yields count data which can be modeled by a multinomial distribution with two characteristics: skewness in the relative frequencies and small sample size relative to the dimension. As a result of these characteristics, a given SAGE sample may fail to capture a large number of expressed mRNA species present in the tissue. Empirical estimators of mRNA species' relative abundance effectively ignore these missing species, and as a result tend to overestimate the abundance of the scarce observed species that comprise a vast majority of the total. We have developed a new Bayesian estimation procedure that quantifies our prior information about these characteristics, yielding a nonlinear shrinkage estimator with efficiency advantages over the MLE. Our prior is a mixture of Dirichlets, whereby species are stochastically partitioned into abundant and scarce classes, each with its own multivariate prior. Simulation studies reveal that our estimator has lower integrated mean squared error (IMSE) than the MLE for the SAGE scenarios simulated, and yields relative abundance profiles closer in Euclidean distance to the truth for all samples simulated. We apply our method to a SAGE library of normal colon tissue and discuss its implications for assessing differential expression.
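The contrast between the MLE and a shrinkage estimator can be seen with a single symmetric Dirichlet prior, a deliberate simplification of the paper's two-component Dirichlet mixture: the posterior mean pulls the empirical frequencies toward uniformity and gives unobserved species nonzero mass:

```python
def shrinkage_abundance(counts, alpha=0.5):
    """Posterior-mean relative abundances under a symmetric
    Dirichlet(alpha) prior. Observed frequencies are shrunk toward
    the uniform distribution, so species with zero counts receive
    positive estimated abundance, unlike the MLE counts[i]/sum(counts)."""
    n = sum(counts)
    k = len(counts)
    return [(c + alpha) / (n + k * alpha) for c in counts]
```

For tag counts `[8, 2, 0]` the MLE gives `[0.8, 0.2, 0.0]`, while the shrinkage estimate moves mass from the abundant species to the unseen one; the mixture prior in the paper achieves the same effect with different shrinkage strengths for abundant and scarce classes.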

19.
McClure NS, Whitlock MC. Heredity 2012, 109(3): 173-179
We describe a new method of estimating the selfing rate (S) in a mixed mating population based on a population structure approach that accounts for possible intergenerational correlation in selfing rate, giving rise to an estimate of the upper limit for heritability of selfing rate (h(2)). A correlation between generations in selfing rate is shown to affect one- and two-locus probabilities of identity by descent. Conventional estimates of selfing rate based on a population structure approach are positively biased by intergenerational correlation in selfing. Multilocus genotypes of individuals are used to give maximum-likelihood estimates of S and h(2) in the presence of scoring artifacts. Our multilocus estimation of selfing rate and its heritability (MESH) method was tested with simulated data for a range of conditions. Selfing rate estimates from MESH have low bias and root mean squared error, whereas estimates of the heritability of selfing rate have more uncertainty. Increasing the number of individuals in a sample helps to reduce bias and root mean squared error more than increasing the number of loci of sampled individuals. Improved estimates of selfing rate, as well as estimates of its heritability, can be obtained with this method, although a large number of loci and individuals are needed to achieve best results.

20.
For small samples, classifier design algorithms typically suffer from overfitting. Given a set of features, a classifier must be designed and its error estimated. For small samples, an error estimator may be unbiased but, owing to a large variance, often give very optimistic estimates. This paper proposes mitigating the small-sample problem by designing classifiers from a probability distribution resulting from spreading the mass of the sample points to make classification more difficult, while maintaining sample geometry. The algorithm is parameterized by the variance of the spreading distribution. By increasing the spread, the algorithm finds gene sets whose classification accuracy remains strong relative to greater spreading of the sample. The error gives a measure of the strength of the feature set as a function of the spread. The algorithm yields feature sets that can distinguish the two classes, not only for the sample data, but for distributions spread beyond the sample data. For linear classifiers, the topic of the present paper, the classifiers are derived analytically from the model, thereby providing an enormous savings in computation time. The algorithm is applied to cancer classification via cDNA microarrays. In particular, the genes BRCA1 and BRCA2 are associated with a hereditary disposition to breast cancer, and the algorithm is used to find gene sets whose expressions can be used to classify BRCA1 and BRCA2 tumors.

