Similar Documents
20 similar documents found (search time: 265 ms)
1.
Informative gene selection for human tumors based on SVM and mean impact value  (Cited 1 time: 0 self-citations, 1 by others)
Selecting informative genes for tumor classification from gene expression profiles is an important means of discovering tumor-specific genes and exploring tumor gene expression patterns. Tumor diagnosis based on the classification information obtained from gene expression profiles is a major research direction in bioinformatics and promises to become a fast and effective method of molecular tumor diagnosis in clinical medicine. Given the high dimensionality, small sample size, and heavy noise of tumor gene expression data, an algorithm is proposed that combines support vector machines (SVM) with the mean impact value (MIV) to search for informative tumor genes. Its advantage is that it can find multiple informative gene subsets containing as few genes as possible while retaining as much discriminative power as possible. The feasibility and effectiveness of the algorithm are verified on binary-classification tumor datasets; for the colon cancer dataset, only 3 genes are needed to achieve 100% leave-one-out cross-validation accuracy. To avoid the influence of different sample-set partitions on classification performance, full-fold cross-validation is further used to evaluate the classification performance of each informative gene subset and to select the most reliable subsets. Compared with other tumor classification methods, the experimental results show clear advantages in both the number of informative genes and classification performance.
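The mean-impact-value (MIV) idea in this abstract can be sketched as follows: fit a classifier, perturb each gene's expression by ±10%, and rank genes by the mean change in the model output. This is a minimal illustration on synthetic data, using a regularized least-squares linear model as a stand-in for the SVM; the data and every name here are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy expression matrix: 200 samples x 8 genes; only genes 0 and 1 are informative.
n, p = 200, 8
X = rng.normal(size=(n, p))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Stand-in for the SVM: a regularized least-squares linear classifier.
def fit(X, y, lam=1e-2):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.linalg.solve(Xb.T @ Xb + lam * np.eye(Xb.shape[1]), Xb.T @ y)

def predict(w, X):
    return np.hstack([X, np.ones((len(X), 1))]) @ w

w = fit(X, y)

# Mean impact value: perturb each gene by +/-10% and average the output change.
def miv(w, X):
    scores = np.empty(X.shape[1])
    for j in range(X.shape[1]):
        up, down = X.copy(), X.copy()
        up[:, j] *= 1.10
        down[:, j] *= 0.90
        scores[j] = np.mean(np.abs(predict(w, up) - predict(w, down)))
    return scores

scores = miv(w, X)
ranking = np.argsort(scores)[::-1]  # genes ordered by impact
```

In the paper's setting the ranking would drive a search for small informative subsets; here it simply recovers the two planted signal genes.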

2.
Tumor classification based on tumor gene expression profiles is an important research topic in bioinformatics. Traditional tumor feature extraction methods are mostly based on informative gene selection, but screening genes inevitably loses some classification information. A tumor subtype feature extraction method based on adjacency matrix decomposition is proposed: first, a Gaussian-weight adjacency matrix is constructed from the tumor gene expression data; next, singular value decomposition (SVD) is applied to the adjacency matrix; finally, the feature row vectors of the resulting orthogonal matrix are used as classification features and fed to a support vector machine. Leave-one-out experiments on a gene expression dataset of two leukemia subtypes demonstrate the feasibility and effectiveness of the method.
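A minimal sketch of the first two steps described above (Gaussian-weight adjacency matrix over samples, then SVD to obtain per-sample feature vectors), on synthetic data; the median-distance bandwidth heuristic is our assumption, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: 20 samples x 50 genes, two subtypes with shifted means.
X = np.vstack([rng.normal(0.0, 1.0, (10, 50)),
               rng.normal(1.5, 1.0, (10, 50))])

# Gaussian-weight adjacency matrix over samples.
def gaussian_adjacency(X, sigma=None):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    if sigma is None:
        sigma = np.sqrt(np.median(d2))  # bandwidth heuristic (assumption)
    return np.exp(-d2 / (2 * sigma ** 2))

A = gaussian_adjacency(X)

# SVD of the symmetric adjacency matrix; rows of U serve as sample features.
U, s, Vt = np.linalg.svd(A)
features = U[:, :3]  # keep the 3 leading components per sample
```

The `features` rows would then be passed to an SVM for classification, as in the abstract.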

3.
Tumor subtyping and feature gene selection based on gene expression profiles  (Cited 20 times: 0 self-citations, 20 by others)
Based on an analysis of the characteristics of gene expression profile data, a strategy is proposed for molecular subtyping of tumors and selection of the corresponding subtype feature genes. The strategy comprises three steps: first, an unsupervised gene filtering algorithm reduces the noise in the data used for subtyping; second, a probabilistic model is proposed to model the class structure in the samples; finally, based on the clustering result, relative entropy is used to identify genes that contribute strongly to the classification as feature genes. Re-mining two publicly available datasets with this strategy not only recovered the information obtainable by other methods but also provided finer and more biologically significant information, showing clear advantages.

4.
Application of residual-image-set principal component analysis to protein mass spectrometry data  (Cited 1 time: 1 self-citation, 0 by others)
Cancer protein mass spectrometry data contain a large amount of unknown internal structure and many variables. Targeting these characteristics, and building on prior applications of residual-image-set PCA (secondary PCA), a new method is proposed that first selects a feature subset with a t-test, then extracts features with residual-image-set PCA, and finally classifies with linear discriminant analysis. Classification experiments on typical cancer protein mass spectrometry data show that the method not only achieves a high recognition rate but also needs only a small feature subset and classifies quickly, improving both accuracy and speed.
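The first two stages (t-test feature screening, then PCA-based feature extraction) can be sketched like this; plain PCA via SVD is used as a simplified stand-in for the residual-image-set (secondary) PCA, and the synthetic spectra are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy spectra: 40 samples x 100 m/z features; the first 5 features carry signal.
y = np.array([0] * 20 + [1] * 20)
X = rng.normal(0, 1, (40, 100))
X[y == 1, :5] += 2.0

# Step 1: two-sample t-statistic per feature, keep the top-k.
def t_filter(X, y, k=10):
    a, b = X[y == 0], X[y == 1]
    se = np.sqrt(a.var(0, ddof=1) / len(a) + b.var(0, ddof=1) / len(b))
    t = np.abs(a.mean(0) - b.mean(0)) / (se + 1e-12)
    return np.argsort(t)[::-1][:k]

keep = t_filter(X, y)

# Step 2: PCA on the retained features (via SVD of the centered matrix).
Xs = X[:, keep]
Xc = Xs - Xs.mean(0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:2].T  # low-dimensional scores for the downstream LDA step
```

`Z` would then be handed to a linear discriminant classifier, completing the pipeline in the abstract.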

5.
Two methods for identifying feature genes based on decision forests  (Cited 1 time: 0 self-citations, 1 by others)
DNA microarrays yield expression data for thousands of genes. Finding feature genes that discriminate disease and filtering out disease-irrelevant genes is a key problem in gene expression data analysis. Exploiting the ensemble advantage of decision forests, two decision-forest-based feature gene identification methods are proposed. They first use a decision forest to filter out, at a given significance level, most genes unrelated to the disease class, and then apply a statistical frequency method and a perturbation method to refine the initially selected feature genes according to their contribution to classification. Finally, a neural network is used as an external classifier to evaluate the selected feature gene subsets. The methods were applied to expression data of 2000 genes from 40 colon cancer tissues and 22 normal tissues. The results show that the feature genes selected by both methods have high discriminative power for the disease and that both can obtain optimal feature gene subsets, with the decision-forest-based statistical frequency method outperforming the perturbation method.
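The statistical frequency method can be illustrated as counting how often each gene ranks among the top features across bootstrap resamples; here absolute correlation with the class label stands in for the decision-forest criterion, on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(4)

n, p = 60, 30
X = rng.normal(size=(n, p))
y = (X[:, 0] - X[:, 1] > 0).astype(int)

# Score genes on a resample by absolute correlation with the label
# (a simple stand-in for a decision-forest split criterion).
def top_genes(Xb, yb, t=5):
    r = np.abs(np.corrcoef(Xb.T, yb)[-1, :-1])
    return np.argsort(r)[::-1][:t]

# Statistical frequency method: count top-t appearances over bootstraps.
counts = np.zeros(p, int)
for _ in range(200):
    idx = rng.integers(0, n, n)
    counts[top_genes(X[idx], y[idx])] += 1

stable = np.argsort(counts)[::-1][:2]  # most consistently selected genes
```

Genes that are selected consistently across resamples form the refined feature subset.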

6.
Application of genetic optimization algorithms to gene data classification  (Cited 1 time: 0 self-citations, 1 by others)
This paper proposes a feature extraction method for gene microarray data based on a genetic algorithm. The raw data are first standardized, the dimensionality is then reduced by analysis of variance, and finally a genetic algorithm optimizes the selection. The genetic operators and fitness function are configured for gene data, and feature genes are selected from the optimized dataset to obtain a small feature subset. To validate the selected features, a classifier built by discriminant analysis under a sample-splitting scheme is used for evaluation. Experiments demonstrate that the method achieves good classification results and that the algorithm is stable and efficient.

7.
Tumor-specific gene expression patterns based on gene expression profiles  (Cited 1 time: 1 self-citation, 0 by others)
Based on tumor gene expression profiles and bioinformatics methods, the discovery of tumor-specific genes and their expression patterns is analyzed starting from the classification of tumor versus normal tissue samples, and the characteristics of tumors at the gene expression level are then explored. First, after analyzing the characteristics of tumor gene expression profiles, a Relief-based strategy for selecting sample classification feature genes is proposed. Then a support vector machine is used as the classifier, classification error rate is used as the criterion for selecting feature genes, and tissue-specific genes that merely reflect differences in tissue composition between tumor and normal samples are excluded to highlight the true class characteristics of tumor samples. Finally, combining statistical methods, the reliability and generality of the specific expression of the selected feature genes in tumor tissue is demonstrated from an information-theoretic perspective, and the specific expression patterns these genes exhibit in tumor tissue are analyzed.
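The Relief-based selection step can be sketched as follows (basic Relief, rewarding features that differ from the nearest miss and agree with the nearest hit); the L1 distance choice and the toy data are our assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy data: gene 0 separates the classes, genes 1..9 are noise.
n, p = 80, 10
X = rng.normal(size=(n, p))
y = rng.integers(0, 2, n)
X[:, 0] += 2.5 * y

# Basic Relief: for each sample, find its nearest hit (same class) and
# nearest miss (other class), and update per-gene weights accordingly.
def relief(X, y):
    w = np.zeros(X.shape[1])
    for i in range(len(X)):
        d = np.abs(X - X[i]).sum(1)
        d[i] = np.inf  # exclude the sample itself
        hit = np.argmin(np.where(y == y[i], d, np.inf))
        miss = np.argmin(np.where(y != y[i], d, np.inf))
        w += np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])
    return w / len(X)

w = relief(X, y)
best = int(np.argmax(w))  # highest-weight gene
```

High-weight genes would then be kept as classification feature genes, as in the abstract's first step.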

8.
王蕊平  王年  苏亮亮  陈乐 《生物信息学》2011,9(2):164-166,170
The existence of massive data is a hallmark of the modern information society, and effectively selecting classification features for samples from thousands of genes is of great significance for cancer diagnosis and treatment. Local non-negative matrix factorization is applied to feature extraction from cancer gene expression data. The gene expression data are first screened; a local non-negative matrix is then constructed and factorized to obtain low-dimensional feature vectors that adequately characterize the samples; finally, the feature vectors are classified with a support vector machine. The results demonstrate the feasibility and effectiveness of the method.
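The factorization core can be sketched with standard multiplicative-update NMF; local NMF adds locality penalties that are omitted here, so this is only the shared skeleton, run on synthetic non-negative data:

```python
import numpy as np

rng = np.random.default_rng(6)

# Non-negative toy expression matrix: 30 samples x 40 genes.
V = np.abs(rng.normal(1.0, 0.3, (30, 40)))

# Basic NMF via multiplicative updates: V ~= W @ H with W, H >= 0.
def nmf(V, k=3, iters=300, eps=1e-9):
    n, m = V.shape
    W = np.abs(rng.normal(size=(n, k))) + eps
    H = np.abs(rng.normal(size=(k, m))) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

W, H = nmf(V)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)  # relative fit error
```

The rows of `W` are the low-dimensional sample features that the abstract feeds to an SVM.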

9.
Random forests: an important method for tumor feature gene selection  (Cited 2 times: 0 self-citations, 2 by others)
Feature selection techniques have been widely applied in bioinformatics, and random forests (RF) is one important such method. RF was used to select feature genes from five gene expression datasets, including gastric, colon, and lung cancer; the selections were combined with a support vector machine (SVM) to classify the original datasets, and the selection and classification results were analyzed. Compared with significance analysis of microarrays (SAM) and ReliefF, the feature genes selected by random forests contain more classification information and yield higher classification accuracy. Combined with its many inherent advantages for classification, random forests can be widely used as a reliable means of analyzing gene expression data.

10.
Liver cancer is one of the most common malignant tumors in China. Analysis of tumor gene expression data is a current research focus and is of great significance for early cancer diagnosis and treatment. To address the severe collinearity among variables and the nonlinear relationship between class labels and predictors exhibited by high-dimensional, small-sample gene expression data, a new partial least squares (PLS) regression technique based on spline transformation is adopted. Redundant information in the gene expression data is first removed by filtering; a cubic B-spline transformation then linearizes the nonlinear gene expression data through reconstruction; and the reconstructed matrix is passed to partial least squares to build the model relating class labels to predictors. Analysis of liver cancer gene expression data shows that this classification model is robust to the data reconstruction, effectively resolves the overfitting and collinearity of high-dimensional small-sample data, and achieves high fitting and classification accuracy.

11.
MOTIVATION: Gene set analysis allows formal testing of subtle but coordinated changes in a group of genes, such as those defined by the Gene Ontology (GO) or KEGG Pathway databases. We propose a new method for gene set analysis based on principal component analysis (PCA) of the gene expression values in the gene set. PCA is an effective method for reducing high dimensionality and capturing variation in gene expression values. One limitation of PCA, however, is that the latent variable identified by the first PC may be unrelated to the outcome. RESULTS: In the proposed supervised PCA (SPCA) model for gene set analysis, the PCs are estimated from a selected subset of genes that are associated with the outcome. Because outcome information is used in the gene selection step, the method is supervised, hence the name Supervised PCA. Owing to the gene selection step, the test statistic in the SPCA model can no longer be well approximated by a t-distribution. We propose a two-component mixture distribution based on Gumbel extreme value distributions to account for the gene selection step. We show that the proposed method compares favorably to currently available gene set analysis methods on simulated and real microarray data. SOFTWARE: The R code for the analyses in this article is available upon request; we are currently working on implementing the proposed method in an R package.
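The screen-then-project idea of supervised PCA (though not the paper's mixture-distribution inference) can be sketched as follows; the synthetic data, the correlation screen, and the subset size are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy gene set: 50 samples x 20 genes; genes 0-2 track a latent outcome.
n, p = 50, 20
f = rng.normal(size=n)                      # latent factor driving the outcome
X = rng.normal(size=(n, p))
X[:, :3] = f[:, None] + rng.normal(0, 0.5, (n, 3))
yv = f + rng.normal(0, 0.5, n)              # continuous outcome

# Supervised PCA: screen genes by association with the outcome, then run
# PCA on the selected subset so the first PC is outcome-related.
def supervised_pca(X, yv, k=5):
    r = np.abs(np.corrcoef(X.T, yv)[-1, :-1])   # per-gene association
    keep = np.argsort(r)[::-1][:k]              # gene selection step
    Xc = X[:, keep] - X[:, keep].mean(0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[0], keep                     # first PC scores, chosen genes

pc1, keep = supervised_pca(X, yv)
corr = abs(np.corrcoef(pc1, yv)[0, 1])  # PC1 now tracks the outcome
```

Unlike unsupervised PCA, the first PC here is guaranteed to be built from outcome-associated genes; the paper's contribution is the corrected null distribution for the resulting test statistic, which this sketch does not attempt.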

12.
MOTIVATION: Since DNA microarray experiments provide a huge amount of gene expression data, they should be analyzed with statistical methods to extract the meaning of the experimental results. Dimensionality reduction methods such as principal component analysis (PCA) are used to roughly visualize the distribution of high-dimensional gene expression data. In the case of binary classification of gene expression data, however, PCA does not use class information when choosing axes, so data clearly separable in the original space may not be separable in the reduced space used by PCA. RESULTS: For visualization and class prediction of gene expression data, we have developed a new SVM-based method, multidimensional SVMs, which generates multiple orthogonal axes. The method projects high-dimensional data into a lower-dimensional space to exhibit the properties of the data clearly and to visualize its distribution. Furthermore, the multiple axes can be used for class prediction. The basic properties of conventional SVMs are retained: solutions of the mathematical programming are sparse, and nonlinear classification is implemented implicitly through kernel functions. Application of our method to experimentally obtained gene expression datasets for patients' samples indicates that the algorithm is efficient and useful for visualization and class prediction. CONTACT: komura@hal.rcast.u-tokyo.ac.jp.

13.
In the analysis of bioinformatics data, a unique challenge arises from the high dimensionality of the measurements. Without loss of generality, we use genomic studies with gene expression measurements as a representative example, but note that the techniques discussed here are also applicable to other types of bioinformatics studies. Principal component analysis (PCA) is a classic dimension reduction approach. It constructs linear combinations of gene expressions, called principal components (PCs). The PCs are orthogonal to each other, can effectively explain the variation in gene expressions, and may have a much lower dimensionality. PCA is computationally simple and can be realized using many existing software packages. This article consists of the following parts. First, we review the standard PCA technique and its applications in bioinformatics data analysis. Second, we describe recent 'non-standard' applications of PCA, including accommodating interactions among genes, pathways and network modules, and conducting PCA with estimating equations as opposed to gene expressions. Third, we introduce several recently proposed PCA-based techniques, including supervised PCA, sparse PCA and functional PCA. Supervised PCA and sparse PCA have been shown to have better empirical performance than standard PCA; functional PCA can analyze time-course gene expression data. Last, we raise awareness of several critical but unsolved problems related to PCA. The goal of this article is to make bioinformatics researchers aware of the PCA technique and, more importantly, its most recent developments, so that this simple yet effective dimension reduction technique can be better employed in bioinformatics data analysis.

14.
JX Liu  Y Xu  CH Zheng  Y Wang  JY Yang 《PloS one》2012,7(7):e38873
Conventional gene selection methods based on principal component analysis (PCA) use only the first principal component (PC) of PCA or sparse PCA to select characteristic genes. These methods implicitly assume that the first PC plays a dominant role in gene selection. In a number of cases this assumption does not hold, however, so conventional PCA-based methods often give poor selection results. To improve the performance of PCA-based gene selection, we put forward a gene selection method that weights PCs by singular values (WPCS). Because different PCs have different importance, the singular values are used as weights to represent the influence of the different PCs on gene selection. ROC curves and AUC statistics on artificial data show that our method outperforms the state-of-the-art methods. Moreover, experimental results on real gene expression datasets show that our method can extract more characteristic genes in response to abiotic stresses than conventional gene selection methods.
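The weighting of PCs by singular values can be sketched as below; the scoring rule (singular-value-weighted absolute loadings) is our simplified reading of WPCS, illustrated on synthetic data with two latent factors so that PC1 alone would miss half the signal genes:

```python
import numpy as np

rng = np.random.default_rng(8)

# Toy matrix: 40 samples x 60 genes with two latent factors.
# Genes 0-4 load on factor 1, genes 5-9 on factor 2; the rest are noise.
n, p = 40, 60
F = rng.normal(size=(n, 2))
L = np.zeros((2, p)); L[0, :5] = 3.0; L[1, 5:10] = 2.0
X = F @ L + rng.normal(0, 0.5, (n, p))

# WPCS-style scoring: weight each gene's loadings across the top PCs
# by the corresponding singular values, instead of using PC1 alone.
def wpcs_scores(X, k=2):
    Xc = X - X.mean(0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    w = s[:k] / s[:k].sum()                      # singular-value weights
    return (w[:, None] * np.abs(Vt[:k])).sum(0)  # weighted loading magnitude

scores = wpcs_scores(X)
top = np.argsort(scores)[::-1][:10]  # should recover genes 0-9
```

A PC1-only selector would pick only the five factor-1 genes; the weighted score recovers both groups.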

15.
Clustering is commonly used for analyzing gene expression data. Despite their successes, clustering methods suffer from a number of limitations. First, they reveal similarities that exist across all of the measurements while obscuring relationships that exist over only a subset of the data. Second, they cannot readily incorporate additional types of information, such as clinical data or known attributes of genes. To circumvent these shortcomings, we propose a single coherent probabilistic model that encompasses much of the rich structure in genomic expression data while incorporating additional information such as experiment type, putative binding sites, or functional information. We show how this model can be learned from the data, allowing us to discover patterns in the data and dependencies between the gene expression patterns and additional attributes. The learned model reveals context-specific relationships that exist only over a subset of the experiments in the dataset. We demonstrate the power of our approach on synthetic data and on two real-world gene expression datasets for yeast. For example, we demonstrate a novel capability that falls naturally out of our framework: predicting the "cluster" of the array resulting from a gene mutation based only on the gene's expression pattern in the context of other mutations.

16.
Recent developments in microarray technology make it possible to capture the gene expression profiles for thousands of genes at once. With this data researchers are tackling problems ranging from the identification of 'cancer genes' to the formidable task of adding functional annotations to our rapidly growing gene databases. Specific research questions suggest patterns of gene expression that are interesting and informative: for instance, genes with large variance or groups of genes that are highly correlated. Cluster analysis and related techniques are proving to be very useful. However, such exploratory methods alone do not provide the opportunity to engage in statistical inference. Given the high dimensionality (thousands of genes) and small sample sizes (often <30) encountered in these datasets, an honest assessment of sampling variability is crucial and can prevent the over-interpretation of spurious results. We describe a statistical framework that encompasses many of the analytical goals in gene expression analysis; our framework is completely compatible with many of the current approaches and, in fact, can increase their utility. We propose the use of a deterministic rule, applied to the parameters of the gene expression distribution, to select a target subset of genes that are of biological interest. In addition to subset membership, the target subset can include information about relationships between genes, such as clustering. This target subset presents an interesting parameter that we can estimate by applying the rule to the sample statistics of microarray data. The parametric bootstrap, based on a multivariate normal model, is used to estimate the distribution of these estimated subsets, and relevant summary measures of this sampling distribution are proposed. We focus on rules that operate on the mean and covariance. Using Bernstein's Inequality, we obtain consistency of the subset estimates, under the assumption that the sample size converges faster to infinity than the logarithm of the number of genes. We also provide a conservative sample size formula guaranteeing that the sample mean and sample covariance matrix are uniformly within a distance epsilon > 0 of the population mean and covariance. The practical performance of the method using a cluster-based subset rule is illustrated with a simulation study. The method is illustrated with an analysis of a publicly available leukemia data set.

17.
Microarray technologies, which can measure tens of thousands of gene expression values simultaneously in a single experiment, have become a common research method for biomedical researchers. Computational tools to analyze microarray data for biological discovery are needed. In this paper, we investigate the feasibility of using formal concept analysis (FCA) as a tool for microarray data analysis. The method of FCA builds a (concept) lattice from the experimental data together with additional biological information. For microarray data, each vertex of the lattice corresponds to a subset of genes that are grouped together according to their expression values and some biological information related to gene function. The lattice structure of these gene sets might reflect biological relationships in the dataset. Similarities and differences between experiments can then be investigated by comparing their corresponding lattices according to various graph measures. We apply our method to microarray data derived from influenza-infected mouse lung tissue and healthy controls. Our preliminary results show the promise of our method as a tool for microarray data analysis.

18.
Analysis of multivariate data sets from, for example, microarray studies frequently results in lists of genes which are associated with some response of interest. The biological interpretation is often complicated by the statistical instability of the obtained gene lists, which may partly be due to functional redundancy among genes, implying that multiple genes can play exchangeable roles in the cell. In this paper, we use the concept of exchangeability of random variables to model this functional redundancy and thereby account for the instability. We present a flexible framework to incorporate the exchangeability into the representation of lists. The proposed framework supports straightforward comparison between any two lists. It can also be used to generate new, more stable gene rankings that incorporate more information from the experimental data. Using two microarray data sets, we show that the proposed method provides more robust gene rankings than existing methods with respect to sampling variations, without compromising the biological significance of the rankings.

19.
Relationships among gene expression levels may be associated with the mechanisms of the disease. While identifying a direct association such as a difference in expression levels between case and control groups links genes to disease mechanisms, uncovering an indirect association in the form of a network structure may help reveal the underlying functional module associated with the disease under scrutiny. This paper presents a method to improve the biological relevance in functional module identification from gene expression microarray data by enhancing the structure of a weighted gene co-expression network using a minimum spanning tree. The enhanced network, which is called a backbone network, contains only the essential structural information to represent the gene co-expression network. The entire backbone network is decoupled into a number of coherent sub-networks, and then the functional modules are reconstructed from these sub-networks to ensure minimum redundancy. The method was tested with a simulated gene expression dataset and case-control expression datasets of autism spectrum disorder and colorectal cancer studies. The results indicate that the proposed method can accurately identify clusters in the simulated dataset, and the functional modules of the backbone network are more biologically relevant than those obtained from the original approach.
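The backbone-network step (a minimum spanning tree over a co-expression distance) can be sketched with Prim's algorithm; the `1 - |correlation|` distance and the synthetic two-module data are common choices and assumptions here:

```python
import numpy as np

rng = np.random.default_rng(9)

# Co-expression data over 12 genes: two tight modules plus two noise genes.
f1, f2 = rng.normal(size=(2, 100))
expr = np.vstack([f1 + rng.normal(0, 0.3, (5, 100)),
                  f2 + rng.normal(0, 0.3, (5, 100)),
                  rng.normal(size=(2, 100))])
corr = np.corrcoef(expr)
dist = 1 - np.abs(corr)  # co-expression distance (assumed metric)

# Backbone network: minimum spanning tree via Prim's algorithm.
def mst_edges(dist):
    n = len(dist)
    in_tree, edges = [0], []
    while len(in_tree) < n:
        best = None
        for i in in_tree:
            for j in range(n):
                if j not in in_tree and (best is None or dist[i, j] < best[2]):
                    best = (i, j, dist[i, j])
        edges.append((best[0], best[1]))
        in_tree.append(best[1])
    return edges

edges = mst_edges(dist)  # n-1 edges keeping only the strongest co-expression links
```

In the paper's pipeline the MST is then cut into coherent sub-networks from which functional modules are rebuilt; this sketch stops at the backbone itself.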

20.
《Process Biochemistry》2007,42(3):392-400
The expression levels of 96 genes were characterized and differentiated using a cDNA microarray after the bacterium Escherichia coli was exposed to various toxic chemicals. In all, the effects of 14 different chemicals and 1 mixture were investigated using 1-h exposure data to provide information about the physiological changes brought on by the stress experienced and about chemical–gene expression interactions. Hierarchical clustering analysis showed that the genes could be sub-grouped by their expression patterns, while each chemical tested also showed a unique signature when examined using principal component analysis (PCA). By constructing a chemical–gene expression profile based on changes in the expression of the genes for each chemical, we were able to identify the chemicals' effects and gene targets more systematically. Although only a small number of genes were used for expression analysis, they were sufficient to discriminate between the effects of each exposure. It was also found that a single time point for expression analysis was insufficient for interpreting the effects a given chemical has on the bacterium. Such information cannot be obtained from conventional toxicity studies, demonstrating that chemical–gene expression profiling based on hierarchical clustering and PCA offers a new perspective for bio-monitoring and provides information on dynamic changes occurring at the sub-cellular level.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号