Similar Documents
20 similar documents found (search time: 31 ms)
1.
2.
Model-based deconvolution of genome-wide DNA binding
Motivation: Chromatin immunoprecipitation followed by hybridization to a genomic tiling microarray (ChIP-chip) is a routinely used protocol for localizing the genomic targets of DNA-binding proteins. The resolution to which binding sites in this assay can be identified is commonly considered to be limited by two factors: (1) the resolution at which the genomic targets are tiled in the microarray and (2) the large and variable lengths of the immunoprecipitated DNA fragments.
Results: We have developed a generative model of binding sites in ChIP-chip data and an approach, MeDiChI, for efficiently and robustly learning that model from diverse data sets. We have evaluated MeDiChI's performance using simulated data, as well as on several diverse ChIP-chip data sets collected on widely different tiling array platforms for two different organisms (Saccharomyces cerevisiae and Halobacterium salinarium NRC-1). We find that MeDiChI accurately predicts binding locations to a resolution greater than that of the probe spacing, even for overlapping peaks, and can increase the effective resolution of tiling array data by a factor of 5x or better. Moreover, the method's performance on simulated data provides insights into effectively optimizing the experimental design for increased binding site localization accuracy and efficacy.
Availability: MeDiChI is available as an open-source R package, including all data, from http://baliga.systemsbiology.net/medichi.
Contact: dreiss{at}systemsbiology.org
Supplementary information: Supplementary data are available at Bioinformatics online.
Associate Editor: Martin Bishop
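To make the general idea concrete, the following Python sketch shows peak deconvolution of tiling-array signal with non-negative least squares; the triangular kernel, the 300 bp width and all names are illustrative assumptions, not MeDiChI's actual model or code.

```python
# Hypothetical sketch of kernel deconvolution of ChIP-chip signal.
# Observed probe intensities are modelled as a non-negative superposition of a
# fixed peak profile centred at candidate binding positions; the coefficients
# are recovered with non-negative least squares. All parameters are assumptions.
import numpy as np
from scipy.optimize import nnls

def peak_kernel(offsets, width):
    """Triangular stand-in for the average ChIP fragment coverage profile."""
    return np.clip(1.0 - np.abs(offsets) / width, 0.0, None)

def deconvolve(probe_pos, probe_signal, candidate_pos, width=300.0):
    # Design matrix: column j is the kernel centred at candidate position j,
    # evaluated at every probe coordinate.
    X = peak_kernel(probe_pos[:, None] - candidate_pos[None, :], width)
    coef, _ = nnls(X, probe_signal)          # non-negative binding strengths
    return coef

# Toy example: two binding sites at 1000 and 1220 bp, probes every 250 bp.
probes = np.arange(0, 2500, 250.0)
true_sites = np.array([1000.0, 1220.0])
signal = peak_kernel(probes[:, None] - true_sites[None, :], 300.0).sum(axis=1)
candidates = np.arange(0, 2500, 20.0)        # finer grid than the probe spacing
strengths = deconvolve(probes, signal, candidates)
print(candidates[strengths > 0.2])           # positions with appreciable weight, near the true sites
```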

3.
Motivation: Recent improvements in high-throughput Mass Spectrometry (MS) technology have expedited genome-wide discovery of protein–protein interactions by providing a capability of detecting protein complexes in a physiological setting. Computational inference of protein interaction networks and protein complexes from MS data is challenging. Advances are required in developing robust and seamlessly integrated procedures for assessment of protein–protein interaction affinities, mathematical representation of protein interaction networks, discovery of protein complexes and evaluation of their biological relevance.
Results: A multi-step but easy-to-follow framework for identifying protein complexes from MS pull-down data is introduced. It assesses interaction affinity between two proteins based on similarity of their co-purification patterns derived from MS data. It constructs a protein interaction network by adopting a knowledge-guided threshold selection method. Based on the network, it identifies protein complexes and infers their core components using a graph-theoretical approach. It deploys a statistical evaluation procedure to assess the biological relevance of each found complex. On Saccharomyces cerevisiae pull-down data, the framework outperformed other, more complicated schemes by at least 10% in F1-measure and identified 610 protein complexes with high functional homogeneity based on the enrichment in Gene Ontology (GO) annotation. Manual examination of the complexes brought forward hypotheses on the causes of false identifications. Namely, co-purification of different protein complexes as mediated by a common non-protein molecule, such as DNA, might be a source of false positives. Protein identification bias in pull-down technology, such as the hydrophilic bias, could result in false negatives.
Contact: samatovan{at}ornl.gov
Supplementary information: Supplementary data are available at Bioinformatics online.
Associate Editor: Jonathan Wren
Present address: Department of Biomedical Informatics, Vanderbilt University, Nashville, TN 37232.
The authors wish it to be known that, in their opinion, the first two authors should be regarded as joint First Authors.
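The sketch below illustrates the general recipe in plain Python, not the paper's implementation: score pairwise affinity as the correlation of co-purification profiles, keep edges above a threshold, and report connected components as candidate complexes. The threshold and the toy profile matrix are invented assumptions.

```python
# Illustrative only: co-purification similarity -> thresholded network -> complexes.
import numpy as np
import networkx as nx

def candidate_complexes(profiles, proteins, threshold=0.8):
    corr = np.corrcoef(profiles.T)          # protein-by-protein Pearson similarity
    g = nx.Graph()
    g.add_nodes_from(proteins)
    n = len(proteins)
    for i in range(n):
        for j in range(i + 1, n):
            if corr[i, j] >= threshold:     # assumed cutoff, not knowledge-guided here
                g.add_edge(proteins[i], proteins[j], weight=corr[i, j])
    return [set(c) for c in nx.connected_components(g) if len(c) > 1]

# Toy data: 5 pull-down experiments, 4 proteins; A/B co-purify, C/D co-purify.
profiles = np.array([[5, 4, 0, 0],
                     [6, 5, 1, 0],
                     [0, 0, 7, 6],
                     [1, 0, 5, 5],
                     [4, 4, 0, 1]], dtype=float)
print(candidate_complexes(profiles, ["A", "B", "C", "D"]))
```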

4.
Motivation: High-throughput experimental and computational methods are generating a wealth of protein–protein interaction data for a variety of organisms. However, data produced by current state-of-the-art methods include many false positives, which can hinder the analyses needed to derive biological insights. One way to address this problem is to assign confidence scores that reflect the reliability and biological significance of each interaction. Most previously described scoring methods use a set of likely true positives to train a model to score all interactions in a dataset. A single positive training set, however, may be biased and not representative of true interaction space.
Results: We demonstrate a method to score protein interactions by utilizing multiple independent sets of training positives to reduce the potential bias inherent in using a single training set. We used a set of benchmark yeast protein interactions to show that our approach outperforms other scoring methods. Our approach can also score interactions across data types, which makes it more widely applicable than many previously proposed methods. We applied the method to protein interaction data from both Drosophila melanogaster and Homo sapiens. Independent evaluations show that the resulting confidence scores accurately reflect the biological significance of the interactions.
Contact: rfinley{at}wayne.edu
Supplementary information: Supplementary data are available at Bioinformatics online.
Associate Editor: Burkhard Rost
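A hedged sketch of the general idea follows (not the authors' exact method): train one classifier per independent positive reference set, score every candidate interaction with each model, and combine the scores so that no single, possibly biased, training set dominates. The feature vectors, reference sets and logistic-regression choice are all assumptions.

```python
# Sketch: combine confidence scores from classifiers trained on multiple positive sets.
import numpy as np
from sklearn.linear_model import LogisticRegression

def multi_trainingset_scores(features, positive_sets, negatives, combine=np.mean):
    """features: dict interaction -> feature vector; positive_sets: list of lists of
    interactions assumed true; negatives: list of interactions assumed false."""
    all_ids = list(features)
    X_all = np.array([features[i] for i in all_ids])
    per_set = []
    for positives in positive_sets:
        ids = positives + negatives
        X = np.array([features[i] for i in ids])
        y = np.array([1] * len(positives) + [0] * len(negatives))
        model = LogisticRegression(max_iter=1000).fit(X, y)
        per_set.append(model.predict_proba(X_all)[:, 1])
    combined = combine(np.vstack(per_set), axis=0)   # e.g. mean across training sets
    return dict(zip(all_ids, combined))

# Toy usage with invented feature vectors and reference sets.
feats = {("a", "b"): [1.0, 0.2], ("a", "c"): [0.1, 0.9],
         ("b", "c"): [0.8, 0.7], ("c", "d"): [0.05, 0.1]}
print(multi_trainingset_scores(feats, [[("a", "b")], [("b", "c")]], [("c", "d")]))
```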

5.
Motivation: Understanding the complexity in the gene–phenotype relationship is vital for revealing the genetic basis of common diseases. Recent studies on the basis of the human interactome and phenome not only uncover prevalent phenotypic overlap and genetic overlap between diseases, but also reveal a modular organization of the genetic landscape of human diseases, providing new opportunities to reduce the complexity in dissecting the gene–phenotype association.
Results: We provide systematic and quantitative evidence that phenotypic overlap implies genetic overlap. With these results, we perform the first heterogeneous alignment of the human interactome and phenome via a network alignment technique and identify 39 disease families with corresponding causative gene networks. Finally, we propose AlignPI, an alignment-based framework to predict disease genes, and identify plausible candidates for 70 diseases. Our method scales well to the whole genome, as demonstrated by prioritizing 6154 genes across 37 chromosome regions for Crohn's disease (CD). Results are consistent with a recent meta-analysis of genome-wide association studies for CD.
Availability: Bi-modules and disease gene predictions are freely available at http://bioinfo.au.tsinghua.edu.cn/alignpi/
Contact: ruijiang{at}tsinghua.edu.cn
Supplementary information: Supplementary data are available at Bioinformatics online.
Associate Editor: Trey Ideker
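As a minimal illustration of the kind of evidence claimed (phenotypic overlap implies genetic overlap), the sketch below compares a phenotype-similarity score with the Jaccard overlap of causative gene sets across disease pairs and tests for a positive rank correlation. The disease names, scores and gene sets are invented; this is not the AlignPI alignment procedure itself.

```python
# Toy check that phenotype similarity tracks genetic overlap across disease pairs.
from itertools import combinations
from scipy.stats import spearmanr

phenotype_sim = {("D1", "D2"): 0.9, ("D1", "D3"): 0.2, ("D2", "D3"): 0.3}  # invented
genes = {"D1": {"G1", "G2", "G3"}, "D2": {"G2", "G3", "G4"}, "D3": {"G7"}}  # invented

def jaccard(a, b):
    return len(a & b) / len(a | b)

pairs = list(combinations(sorted(genes), 2))
pheno = [phenotype_sim[p] for p in pairs]
genetic = [jaccard(genes[a], genes[b]) for a, b in pairs]
rho, pval = spearmanr(pheno, genetic)
print(f"Spearman rho={rho:.2f}, p={pval:.2f}")
```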

6.
Motivation: Reliable structural modelling of protein–protein complexes has widespread application, from drug design to advancing our knowledge of protein interactions and function. This work addresses three important issues in protein–protein docking: implementing backbone flexibility, incorporating prior indications from experiment and bioinformatics, and providing public access via a server. 3D-Garden (Global And Restrained Docking Exploration Nexus), our benchmarked and server-ready flexible docking system, allows sophisticated programming of surface patches by the user via a facet representation of the interactors’ molecular surfaces (generated with the marching cubes algorithm). Flexibility is implemented as a weighted exhaustive conformer search for each clashing pair of molecular branches in a set of 5000 models filtered from around 340 000 initially.
Results: In a non-global assessment, carried out strictly according to the protocols for number of models considered and model quality of the Critical Assessment of Protein Interactions (CAPRI) experiment, over the widely used Benchmark 2.0 of 84 complexes, 3D-Garden identifies a set of ten models containing an acceptable or better model in 29/45 test cases, including one with large conformational change. In 19/45 cases an acceptable or better model is ranked first or second out of 340 000 candidates.
Availability: http://www.sbg.bio.ic.ac.uk/3dgarden (server)
Contact: v.lesk{at}ic.ac.uk
Supplementary information: Supplementary data are available at Bioinformatics online.
Associate Editor: Burkhard Rost

7.
MMG: a probabilistic tool to identify submodules of metabolic pathways
Motivation: A fundamental task in systems biology is the identification of groups of genes that are involved in the cellular response to particular signals. At its simplest level, this often reduces to identifying biological quantities (mRNA abundance, enzyme concentrations, etc.) which are differentially expressed in two different conditions. Popular approaches involve using t-test statistics, based on modelling the data as arising from a mixture distribution. A common assumption of these approaches is that the data are independent and identically distributed; however, biological quantities are usually related through a complex (weighted) network of interactions, and often the more pertinent question is which subnetworks are differentially expressed, rather than which genes. Furthermore, in many interesting cases (such as high-throughput proteomics and metabolomics), only very partial observations are available, resulting in the need for efficient imputation techniques.
Results: We introduce Mixture Model on Graphs (MMG), a novel probabilistic model to identify differentially expressed submodules of biological networks and pathways. The method can easily incorporate information about weights in the network, is robust against missing data and can be easily generalized to directed networks. We propose an efficient sampling strategy to infer posterior probabilities of differential expression, as well as posterior probabilities over the model parameters. We assess our method on artificial data, demonstrating significant improvements over standard mixture model clustering. Analysis of our model results on quantitative high-throughput proteomic data leads to the identification of biologically significant subnetworks, as well as the prediction of the expression level of a number of enzymes, some of which are then verified experimentally.
Availability: MATLAB code is available from http://www.dcs.shef.ac.uk/~guido/software.html
Contact: guido{at}dcs.shef.ac.uk
Supplementary information: Supplementary data are available at Bioinformatics online.
Associate Editor: Jonathan Wren
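A loose Python sketch of the underlying idea of a "mixture model on graphs" follows: each node's expression change is modelled by a two-component Gaussian mixture (unchanged vs differential), and a node's prior of being differential is pulled toward the current posteriors of its network neighbours so that coherent subnetworks light up together. This is a simplified mean-field-style iteration with invented parameters, not the paper's MCMC scheme.

```python
# Simplified, assumption-laden sketch of network-aware mixture modelling.
import numpy as np
import networkx as nx
from scipy.stats import norm

def network_mixture(graph, logfc, mu1=2.0, sigma=1.0, smoothing=0.7, iters=20):
    nodes = list(graph)
    post = {n: 0.5 for n in nodes}                      # P(differential | data)
    for _ in range(iters):
        for n in nodes:
            nbrs = list(graph.neighbors(n))
            neighbour_prior = np.mean([post[m] for m in nbrs]) if nbrs else 0.5
            prior = smoothing * neighbour_prior + (1 - smoothing) * 0.5
            like1 = norm.pdf(logfc[n], loc=mu1, scale=sigma)   # differential component
            like0 = norm.pdf(logfc[n], loc=0.0, scale=sigma)   # unchanged component
            post[n] = prior * like1 / (prior * like1 + (1 - prior) * like0)
    return post

g = nx.path_graph(5)                                    # toy 5-node pathway
logfc = {0: 0.1, 1: 1.8, 2: 2.2, 3: 1.5, 4: -0.2}       # invented log fold-changes
print(network_mixture(g, logfc))
```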

8.
The ability to rank proteins by their likely success in crystallization is useful in current Structural Biology efforts and in particular in high-throughput Structural Genomics initiatives. We present ParCrys, a Parzen Window approach to estimate a protein's propensity to produce diffraction-quality crystals. The Protein Data Bank (PDB) provided training data whilst the databases TargetDB and PepcDB were used to define feature selection data as well as test data independent of feature selection and training. ParCrys outperforms the OB-Score, SECRET and CRYSTALP on the data examined, with accuracy and Matthews correlation coefficient values of 79.1% and 0.582, respectively (74.0% and 0.227, respectively, on data with a ‘real-world’ ratio of positive:negative examples). ParCrys predictions and associated data are available from www.compbio.dundee.ac.uk/parcrys.
Contact: geoff{at}compbio.dundee.ac.uk
Supplementary information: Supplementary data are available at Bioinformatics online.
Associate Editor: John Quackenbush
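The following sketch illustrates a Parzen-window (kernel density) classifier of the general sort described: estimate class-conditional densities of a few sequence-derived features for crystallizable vs recalcitrant training proteins, then score a query by the log density ratio. The two toy features and all data are invented and are not the ParCrys feature set.

```python
# Hedged Parzen-window classifier sketch using Gaussian kernel density estimates.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
# Toy training features (two arbitrary sequence-derived properties per protein).
crystallizable = rng.normal([6.0, -0.2], 0.5, size=(200, 2)).T   # shape (2, n)
recalcitrant   = rng.normal([8.5,  0.3], 0.7, size=(200, 2)).T

kde_pos = gaussian_kde(crystallizable)
kde_neg = gaussian_kde(recalcitrant)

def crystallization_score(features):
    """Log density ratio; larger means more 'crystallizable-like'."""
    f = np.asarray(features, dtype=float).reshape(2, 1)
    return float(np.log(kde_pos(f)[0]) - np.log(kde_neg(f)[0]))

print(crystallization_score([6.2, -0.1]))   # query resembling the positive class
```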

9.
10.
Motivation: In searching for differentially expressed (DE) genes in microarray data, we often observe a fraction of the genes to have unequal variability between groups. This is not an issue in large samples, where a valid test exists that uses individual variances separately. The problem arises in the small-sample setting, where the approximately valid Welch test lacks sensitivity, while the more sensitive moderated t-test assumes equal variance.
Methods: We introduce a moderated Welch test (MWT) that allows unequal variance between groups. It is based on (i) weighting of pooled and unpooled standard errors and (ii) improved estimation of the gene-level variance that exploits the information from across the genes.
Results: When a non-trivial proportion of genes has unequal variability, false discovery rate (FDR) estimates based on the standard t and moderated t-tests are often too optimistic, while the standard Welch test has low sensitivity. The MWT is shown to (i) perform better than the standard t, the standard Welch and the moderated t-tests when the variances are unequal between groups and (ii) perform similarly to the moderated t, and better than the standard t and Welch tests, when the group variances are equal. These results mean that MWT is more reliable than other existing tests over a wider range of data conditions.
Availability: An R package to perform MWT is available at http://www.meb.ki.se/~yudpaw
Contact: yudi.pawitan{at}ki.se
Supplementary information: Supplementary data are available at Bioinformatics online.
Associate Editor: Martin Bishop
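A simplified numerical sketch of the "moderated Welch" idea is shown below: per-group gene variances are shrunk toward the across-gene average (a crude stand-in for the empirical-Bayes moderation the paper describes), and the statistic uses Welch-style unpooled standard errors built from the shrunken variances. The fixed shrinkage weight and simulated data are assumptions, not the MWT estimator.

```python
# Crude moderated-Welch sketch: variance shrinkage + unpooled standard errors.
import numpy as np

def moderated_welch(x, y, w=0.5):
    """x, y: genes-by-samples arrays for the two groups; w is an assumed shrinkage weight."""
    n1, n2 = x.shape[1], y.shape[1]
    v1, v2 = x.var(axis=1, ddof=1), y.var(axis=1, ddof=1)
    # Shrink each gene's variance toward the mean variance across all genes.
    v1_mod = w * v1.mean() + (1 - w) * v1
    v2_mod = w * v2.mean() + (1 - w) * v2
    se = np.sqrt(v1_mod / n1 + v2_mod / n2)
    return (x.mean(axis=1) - y.mean(axis=1)) / se

rng = np.random.default_rng(1)
group1 = rng.normal(0.0, 1.0, size=(1000, 3))       # 1000 genes, 3 replicates
group2 = rng.normal(0.0, 2.0, size=(1000, 3))       # group with larger variance
group2[:50] += 2.0                                   # first 50 genes truly DE
print(moderated_welch(group1, group2)[:5])
```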

11.
Motivation: After 10 years of investigation, the folding mechanisms of β-hairpins are still under debate. Experiments strongly support the zip-out pathway, while most simulations prefer the hydrophobic collapse model (including the middle-out and zip-in pathways). In this article, we show that all pathways can occur during the folding of β-hairpins, but with different probabilities. The zip-out pathway is the most probable one, in agreement with the experimental results. We came to our conclusions by thirty-eight 100-ns room-temperature all-atom molecular dynamics simulations of the β-hairpin trpzip2. Our results may help to clarify the inconsistencies in the current pictures of β-hairpin folding mechanisms.
Contact: yxiao{at}mail.hust.edu.cn
Supplementary information: Supplementary data are available at Bioinformatics online.
Associate Editor: Anna Tramontano

12.
Motivation: Most genome-wide association studies rely on single nucleotide polymorphism (SNP) analyses to identify causal loci. The increased stringency required for genome-wide analyses (with a per-SNP significance threshold typically 10^-7) means that many real signals will be missed. Thus it is still highly relevant to develop methods with improved power at low type I error. Haplotype-based methods provide a promising approach; however, they suffer from statistical problems such as an abundance of rare haplotypes and ambiguity in defining haplotype block boundaries.
Results: We have developed an ancestral haplotype clustering (AncesHC) association method which addresses many of these problems. It can be applied to biallelic or multiallelic markers typed in haploid, diploid or multiploid organisms, and also handles missing genotypes. Our model is free from the assumption of a rigid block structure but recognizes a block-like structure if it exists in the data. We employ a Hidden Markov Model (HMM) to cluster the haplotypes into groups of predicted common ancestral origin. We then test each cluster for association with disease by comparing the numbers of cases and controls with 0, 1 and 2 chromosomes in the cluster. We demonstrate the power of this approach by simulation of case-control status under a range of disease models for 1500 outcrossed mice originating from eight inbred lines. Our results suggest that AncesHC has substantially more power than single-SNP analyses to detect disease association, and is also more powerful than the cladistic haplotype clustering method CLADHC.
Availability: The software can be downloaded from http://www.imperial.ac.uk/medicine/people/l.coin
Contact: l.coin{at}imperial.ac.uk
Supplementary information: Supplementary data are available at Bioinformatics online.
Associate Editor: Martin Bishop
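The HMM clustering itself is beyond a short example, but the final association step described above is easy to illustrate: once haplotypes are grouped into a putative ancestral cluster, compare how many cases and controls carry 0, 1 or 2 chromosomes from that cluster with a 2x3 contingency-table test. The counts below are invented.

```python
# Sketch of testing one ancestral haplotype cluster for disease association.
import numpy as np
from scipy.stats import chi2_contingency

#            0 copies  1 copy  2 copies  (chromosomes assigned to the cluster)
cases    = [      40,     45,      15]
controls = [      70,     25,       5]
table = np.array([cases, controls])

chi2, pval, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, dof={dof}, p={pval:.3g}")
```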

13.
14.
Motivation: Pair-wise residue-residue contacts in proteins can be predicted from both threading templates and sequence-based machine learning. However, most structure modeling approaches only use the template-based contact predictions in guiding the simulations; this is partly because the sequence-based contact predictions are usually considered to be less accurate than those by threading. With the rapid progress in sequence databases and machine-learning techniques, it is necessary to have a detailed and comprehensive assessment of the contact-prediction methods in different template conditions.
Results: We develop two methods for protein-contact prediction: SVM-SEQ is a sequence-based machine learning approach which trains a variety of sequence-derived features on contact maps; SVM-LOMETS collects consensus contact predictions from multiple threading templates. We test both methods on the same set of 554 proteins which are categorized into ‘Easy’, ‘Medium’, ‘Hard’ and ‘Very Hard’ targets based on the evolutionary and structural distance between templates and targets. For the Easy and Medium targets, SVM-LOMETS obviously outperforms SVM-SEQ; but for the Hard and Very Hard targets, the accuracy of the SVM-SEQ predictions is higher than that of SVM-LOMETS by 12–25%. If we combine the SVM-SEQ and SVM-LOMETS predictions together, the total number of correctly predicted contacts in the Hard proteins will increase by more than 60% (or 70% for long-range contacts with a sequence separation ≥24), compared with SVM-LOMETS alone. The advantage of SVM-SEQ is also shown in the CASP7 free modeling targets, where SVM-SEQ is around four times more accurate than SVM-LOMETS in long-range contact prediction. These data demonstrate that state-of-the-art sequence-based contact prediction has reached a level which may be helpful in assisting tertiary structure modeling for targets which do not have close structure templates. The maximum yield should be obtained by the combination of both sequence- and template-based predictions.
Contact: yzhang{at}ku.edu
Supplementary information: Supplementary data are availableat Bioinformatics online.
Associate Editor: Anna Tramontano
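The toy sketch below shows one simple way such a combination could look, purely for illustration: keep the template-based (consensus-threading) contacts and, for hard targets, add sequence-based predictions above a confidence cutoff. The cutoff, scores and contact lists are invented; this is not the paper's combination rule.

```python
# Illustrative combination of template-based and sequence-based contact predictions.
def combine_contacts(template_contacts, seq_scores, target_is_hard, cutoff=0.6):
    """template_contacts: set of (i, j) residue pairs from threading;
    seq_scores: dict (i, j) -> confidence from a sequence-based predictor."""
    combined = set(template_contacts)
    if target_is_hard:                      # rely more on sequence for hard targets
        combined |= {pair for pair, p in seq_scores.items() if p >= cutoff}
    return combined

template = {(5, 40), (12, 61)}
sequence = {(5, 40): 0.9, (20, 55): 0.7, (8, 30): 0.3}
print(sorted(combine_contacts(template, sequence, target_is_hard=True)))
```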

15.
Motivation: Representations of the genome can be generated by the selection of a subpopulation of restriction fragments using ligation-mediated PCR. Such representations form the basis for a number of high-throughput assays, including the HELP assay to study cytosine methylation. We find that HELP data analysis is complicated not only by PCR amplification heterogeneity but also by a complex and variable distribution of cytosine methylation. To address this, we created an analytical pipeline and novel normalization approach that improves concordance between microarray-derived data and single-locus validation results, demonstrating the value of the analytical approach. A major influence on the PCR amplification is the size of the restriction fragment, requiring a quantile normalization approach that reduces the influence of fragment length on signal intensity. Here we describe all of the components of the pipeline, which can also be applied to data derived from other assays based on genomic representations.
Contact: jgreally{at}aecom.yu.edu
Supplementary information: Supplementary data are available at Bioinformatics online.
Associate Editor: Joaquin Dopazo
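A rough sketch of removing a fragment-length effect of this kind follows, with assumptions throughout: probes are grouped into fragment-length bins and each bin's intensities are mapped onto a common reference distribution, i.e. a within-bin quantile normalization. The bin count and simulated data are invented and this is not the published pipeline's code.

```python
# Within-bin quantile normalization to damp a fragment-length effect (illustrative).
import numpy as np

def length_quantile_normalize(intensity, frag_len, n_bins=10):
    intensity = np.asarray(intensity, dtype=float)
    bins = np.quantile(frag_len, np.linspace(0, 1, n_bins + 1))
    which = np.clip(np.digitize(frag_len, bins[1:-1]), 0, n_bins - 1)
    reference = np.sort(intensity)              # common reference distribution
    out = np.empty_like(intensity)
    for b in range(n_bins):
        idx = np.where(which == b)[0]
        if idx.size == 0:
            continue
        ranks = intensity[idx].argsort().argsort()               # within-bin ranks
        ref_positions = (ranks + 0.5) / idx.size * (reference.size - 1)
        out[idx] = reference[np.round(ref_positions).astype(int)]
    return out

rng = np.random.default_rng(2)
frag_len = rng.integers(200, 2000, size=5000)
signal = rng.normal(8.0, 1.0, 5000) - 0.001 * frag_len      # built-in length bias
normalized = length_quantile_normalize(signal, frag_len)
print(np.corrcoef(frag_len, normalized)[0, 1])               # bias largely removed
```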

16.
Motivation: Finding a good network null model for protein–protein interaction (PPI) networks is a fundamental issue. Such a model would provide insights into the interplay between network structure and biological function as well as into evolution. Also, network (graph) models are used to guide biological experiments and discover new biological features. It has been proposed that geometric random graphs are a good model for PPI networks. In a geometric random graph, nodes correspond to uniformly randomly distributed points in a metric space and edges (links) exist between pairs of nodes for which the corresponding points in the metric space are close enough according to some distance norm. Computational experiments have revealed close matches between key topological properties of PPI networks and geometric random graph models. In this work, we push the comparison further by exploiting the fact that the geometric property can be tested for directly. To this end, we develop an algorithm that takes PPI interaction data and embeds proteins into a low-dimensional Euclidean space, under the premise that connectivity information corresponds to Euclidean proximity, as in geometric random graphs. We judge the sensitivity and specificity of the fit by computing the area under the Receiver Operator Characteristic (ROC) curve. The network embedding algorithm is based on multi-dimensional scaling, with the square root of the path length in a network playing the role of the Euclidean distance in the Euclidean space. The algorithm exploits sparsity for computational efficiency, and requires only a few sparse matrix multiplications, giving a complexity of O(N²), where N is the number of proteins.
Results: The algorithm has been verified in the sense that it successfully rediscovers the geometric structure in artificially constructed geometric networks, even when noise is added by re-wiring some links. Applying the algorithm to 19 publicly available PPI networks of various organisms indicated that: (a) geometric effects are present and (b) two-dimensional Euclidean space is generally as effective as higher-dimensional Euclidean space for explaining the connectivity. Testing on a high-confidence yeast data set produced a very strong indication of geometric structure (area under the ROC curve of 0.89), with this network being essentially indistinguishable from a noisy geometric network. Overall, the results add support to the hypothesis that PPI networks have a geometric structure.
Availability: MATLAB code implementing the algorithm is available upon request.
Contact: natasha{at}ics.uci.edu
Associate Editor: Olga Troyanskaya
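Under simplifying assumptions, the embedding test described above can be sketched directly: compute all-pairs shortest path lengths, use their square roots as dissimilarities, embed the nodes in 2-D with metric MDS, and score with the area under the ROC curve how well Euclidean proximity recovers the edges. The example uses a small synthetic geometric graph rather than real PPI data, and a dense MDS solver rather than the paper's sparse O(N²) scheme.

```python
# Sketch: sqrt(path length) dissimilarities -> 2-D MDS embedding -> ROC AUC vs edges.
import numpy as np
import networkx as nx
from sklearn.manifold import MDS
from sklearn.metrics import roc_auc_score
from scipy.spatial.distance import pdist, squareform

g = nx.random_geometric_graph(80, radius=0.25, seed=0)    # synthetic "PPI" network
g = g.subgraph(max(nx.connected_components(g), key=len)).copy()
nodes = list(g)
n = len(nodes)

spl = dict(nx.all_pairs_shortest_path_length(g))
D = np.zeros((n, n))
for i, u in enumerate(nodes):
    for j, v in enumerate(nodes):
        D[i, j] = np.sqrt(spl[u][v])                        # sqrt path length

coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(D)

dist = squareform(pdist(coords))                            # embedded Euclidean distances
iu = np.triu_indices(n, k=1)
labels = np.array([g.has_edge(nodes[i], nodes[j]) for i, j in zip(*iu)])
print("AUC:", roc_auc_score(labels, -dist[iu]))             # short distance should mean edge
```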

17.
Summary: Using literature databases one can find not only known and true relations between processes but also less studied, non-obvious associations. The main problem with discovering this type of relevant biological information is ‘selection’. The ability to distinguish between a true correlation (e.g. between different types of biological processes) and the random chance that this correlation is statistically significant is crucial for any bio-medical research, literature mining being no exception. This problem is especially visible when searching for information which has not been studied and described in many publications. Therefore, a novel bio-linguistic statistical method is required, capable of ‘selecting’ true correlations, even when they are low-frequency associations. In this article, we present such a statistical approach, based on the Z-score and implemented in a web-based application, ‘e-LiSe’.
Availability: The software is available at http://miron.ibb.waw.pl/elise/
Contact: piotr{at}ibb.waw.pl
Supplementary information: Supplementary materials are available at http://miron.ibb.waw.pl/elise/supplementary/
Associate Editor: Alfonso Valencia
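A minimal sketch of a co-occurrence Z-score (not e-LiSe's actual statistic) is shown below: compare the observed number of documents mentioning both terms with the count expected if the terms occurred independently, using a binomial approximation. All counts are invented.

```python
# Toy Z-score for term co-occurrence against an independence baseline.
import math

def cooccurrence_zscore(n_docs, n_a, n_b, n_both):
    p = (n_a / n_docs) * (n_b / n_docs)       # P(both) under independence
    expected = n_docs * p
    sd = math.sqrt(n_docs * p * (1 - p))      # binomial standard deviation
    return (n_both - expected) / sd

# 100000 abstracts; term A in 400, term B in 250, both in 12 (invented counts).
print(f"Z = {cooccurrence_zscore(100_000, 400, 250, 12):.1f}")
```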

18.
Motivation: The proliferation of public data repositories creates a need for meta-analysis methods to efficiently evaluate, integrate and validate related datasets produced by independent groups. A t-based approach has been proposed to integrate effect size from multiple studies by modeling both intra- and between-study variation. Recently, a non-parametric ‘rank product’ method, which is derived based on biological reasoning of fold-change criteria, has been applied to directly combine multiple datasets into one meta study. Fisher's inverse χ² method, which only depends on P-values from individual analyses of each dataset, has been used in a couple of medical studies. While these methods address the question from different angles, it is not clear how they compare with each other.
Results: We comparatively evaluate the three methods: t-based hierarchical modeling, rank products and Fisher's inverse χ² test with P-values from either the t-based or the rank product method. A simulation study shows that the rank product method, in general, has higher sensitivity and selectivity than the t-based method in both individual and meta-analysis, especially in the setting of small sample size and/or large between-study variation. Not surprisingly, Fisher's χ² method depends highly on the method used in the individual analysis. Application to real datasets demonstrates that meta-analysis achieves more reliable identification than an individual analysis, and rank products are more robust in gene ranking, which leads to a much higher reproducibility among independent studies. Though t-based meta-analysis greatly improves over the individual analysis, it suffers from a potentially large number of false positives when P-values serve as the threshold. We conclude that careful meta-analysis is a powerful tool for integrating multiple array studies.
Contact: fxhong{at}jimmy.harvard.edu
Supplementary information: Supplementary data are available at Bioinformatics online.
Associate Editor: David Rocke
Present address: Department of Biostatistics and Computational Biology, Dana-Farber Cancer Institute, Harvard School of Public Health, 44 Binney Street, Boston, MA 02115, USA.
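Two of the compared approaches are simple enough to sketch directly: the rank product (the geometric mean of a gene's fold-change ranks across studies) and Fisher's inverse χ² combination of per-study P-values, where -2 Σ log p follows a χ² distribution with 2k degrees of freedom under the null. The simulated data below are only for illustration; this is not the evaluated software.

```python
# Rank product and Fisher's inverse chi-square combination, on simulated studies.
import numpy as np
from scipy.stats import gmean, chi2

def rank_product(fold_changes):
    """fold_changes: genes-by-studies array; smaller RP = more consistently up-regulated."""
    ranks = np.argsort(np.argsort(-fold_changes, axis=0), axis=0) + 1
    return gmean(ranks, axis=1)

def fisher_combine(pvalues):
    """pvalues: genes-by-studies array of per-study P-values."""
    stat = -2.0 * np.log(pvalues).sum(axis=1)
    return chi2.sf(stat, df=2 * pvalues.shape[1])

rng = np.random.default_rng(3)
fc = rng.normal(0, 1, size=(500, 3))
fc[:20] += 1.5                                   # 20 genes up-regulated in all studies
pv = rng.uniform(size=(500, 3))
pv[:20] = rng.uniform(0, 0.05, size=(20, 3))     # and with small per-study P-values
print(rank_product(fc)[:5], fisher_combine(pv)[:5])
```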

19.
GENOME: a rapid coalescent-based whole genome simulator
Summary: GENOME proposes a rapid coalescent-based approach to simulate whole genome data. In addition to the features of standard coalescent simulators, the program allows for recombination rates to vary along the genome and for flexible population histories. Within small regions, we have evaluated samples simulated by GENOME to verify that GENOME provides the expected LD patterns and frequency spectra. The program can be used to study the sampling properties of any statistic for a whole genome study.
Availability: The program and C++ source code are available online at http://www.sph.umich.edu/csg/liang/genome/
Contact: lianglim{at}umich.edu
Supplementary information: Supplementary data are available at Bioinformatics online.
Associate Editor: Martin Bishop
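This is not the GENOME program itself, but the textbook coalescent process it builds on is easy to illustrate: for a sample of k lineages, the waiting time to the next coalescence is exponential with rate k(k-1)/2 (time in units of 2N generations), after which two random lineages merge.

```python
# Standard single-locus coalescent waiting times (illustration only, no recombination).
import random

def simulate_coalescent_times(sample_size, seed=0):
    """Return cumulative coalescence times, most recent first, for one locus."""
    rng = random.Random(seed)
    times, t, k = [], 0.0, sample_size
    while k > 1:
        rate = k * (k - 1) / 2.0        # pairwise coalescence rate for k lineages
        t += rng.expovariate(rate)      # exponential waiting time to next merger
        times.append(t)
        k -= 1                          # two lineages coalesce into one
    return times

times = simulate_coalescent_times(10)
print(f"TMRCA of a sample of 10: {times[-1]:.3f} (in units of 2N generations)")
```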

20.
Motivation: Mass spectrometry (MS), such as the surface-enhanced laser desorption and ionization time-of-flight (SELDI-TOF) MS, provides a potentially promising proteomic technology for biomarker discovery. An important matter for such a technology to be used routinely is its reproducibility. It is of significant interest to develop quantitative measures to evaluate the quality and reliability of different experimental methods.
Results: We compare the quality of SELDI-TOF MS data using unfractionated, fractionated plasma samples and abundant protein depletion methods in terms of the numbers of detected peaks and reliability. Several statistical quality-control and quality-assessment techniques are proposed, including the Graeco–Latin square design for the sample allocation on a Protein chip; the use of the pairwise Pearson correlation coefficient as the similarity measure between the spectra, in conjunction with multi-dimensional scaling (MDS), for graphically evaluating similarity of replicates and assessing outlier samples; and the use of the reliability ratio for evaluating reproducibility. Our results show that the number of peaks detected is similar among the three sample preparation technologies, and the use of the Sigma multi-removal kit does not improve peak detection. Fractionation of plasma samples introduces more experimental variability. The peaks detected using the unfractionated plasma samples have the highest reproducibility as determined by the reliability ratio.
Availability: Our algorithm for assessment of SELDI-TOF experiment quality is available at http://www.biostat.harvard.edu/~xlin
Contact: harezlak{at}post.harvard.edu
Supplementary information: Supplementary data are available at Bioinformatics online.
Associate Editor: Thomas Lengauer
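The replicate-similarity check described above can be sketched with invented data: compute pairwise Pearson correlations between spectra, convert them to dissimilarities (1 - r), and project the samples into 2-D with MDS so that outlying replicates stand apart. This is a generic illustration, not the authors' quality-assessment code.

```python
# Pearson-correlation dissimilarities + MDS for spotting outlier replicate spectra.
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(4)
base = rng.normal(10, 2, size=500)                      # a common underlying spectrum
spectra = np.vstack([base + rng.normal(0, 0.5, 500) for _ in range(5)]
                    + [rng.normal(10, 2, size=500)])    # 5 replicates + 1 outlier

corr = np.corrcoef(spectra)                             # pairwise Pearson correlations
dissimilarity = 1.0 - corr
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dissimilarity)
print(np.round(coords, 2))                              # the outlier lies far from the cluster
```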
