Similar documents
 20 similar documents found (search time: 15 ms)
1.
The study of large-scale evolutionary patterns in the fossil record has benefited from a diversity of approaches, including analysis of taxonomic data, ecology, geography, and morphology. Although genealogy is an important component of macroevolution, recent visions of phylogenetic analysis as replacing rather than supplementing other approaches are short-sighted. The ability of traditional Linnaean taxa to document evolutionary patterns is mainly an empirical rather than a theoretical issue, yet the use of these taxa has been dismissed without thorough evaluation of their empirical properties. Phylogenetic analysis can help compensate for some of the fossil record's imperfections. However, the shortcomings of the phylogenetic approach have not been adequately acknowledged, and we still lack a rigorous comparison between the phylogenetic approach and probabilistic approaches based on sampling theory. Important inferences about the history of life based on nongenealogical data have later been corroborated with genealogical and other analyses, suggesting that we risk an enormous loss of knowledge and understanding if we categorically dismiss nonphylogenetic data.

2.
Bos DH, Turner SM, Dewoody JA. Hereditas, 2007, 144(6): 228-234
The direct sequencing of PCR products from diploid organisms is problematic because of ambiguities associated with phase inference in multi-site heterozygotes. Several molecular methods such as cloning, SSCP, and DGGE have been developed to empirically reduce diploid sequences to their constitutive haploid components, but in theory these empirical approaches can be supplanted by analytical treatment of diploid sequences. Analytical approaches are more desirable than molecular methods because of the added time and expense required to generate molecular data. A variety of analytical methods have been developed to address this issue, but few have been rigorously evaluated with empirical data. Furthermore, they all assume that the sequences under consideration are evolving in a neutral fashion and that they contain only a moderate number of heterozygous sites. Here, we use non-neutral major histocompatibility complex (MHC) sequences comprising large numbers of heterozygous sites that are under strong balancing selection to evaluate the performance of the popular Bayesian algorithm implemented by the program PHASE. Our results suggest that PHASE performs admirably with non-neutral sequences of moderate length with numerous heterozygous sites typical of MHC class II sequences. We conclude that analytical approaches to haplotype inference have great potential in large-scale population genetic assays, but recommend ground-truthing analytical results using empirical (molecular) approaches at the outset of population-level analyses.
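Ground-truthing of this kind amounts to comparing each individual's computationally inferred haplotype pair with the pair recovered by cloning. The sketch below is an illustration only, with hypothetical sequences (not data from the study): it scores per-site concordance while allowing for the arbitrary ordering of the two haplotypes within an individual.

```python
def haplotype_pair_accuracy(inferred, truth):
    """Fraction of positions at which an inferred haplotype pair matches the
    molecularly determined pair, allowing for the arbitrary order of the two
    haplotypes within an individual."""
    (i1, i2), (t1, t2) = inferred, truth
    same = sum(a == b for a, b in zip(i1, t1)) + sum(a == b for a, b in zip(i2, t2))
    swapped = sum(a == b for a, b in zip(i1, t2)) + sum(a == b for a, b in zip(i2, t1))
    return max(same, swapped) / (2 * len(i1))

# Hypothetical example: cloned haplotypes vs. a computationally phased pair
truth = ("ACGTA", "ATGCA")
inferred = ("ACGCA", "ATGTA")  # one heterozygous site assigned to the wrong haplotype
print(haplotype_pair_accuracy(inferred, truth))  # 0.8
```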

3.
Recent analyses of complete genome sequences have revealed that many genomes have been duplicated in their evolutionary past. Such events have been associated with important biological transitions, major leaps in evolution and adaptive radiations of species. Here, we consider recently developed computational methods to detect such ancient large-scale gene duplication events. Several new approaches have been used to show that large-scale gene duplications are more common than previously thought.

4.
Mangrove forests are highly productive, act as large carbon sinks, and provide numerous goods and ecosystem services. However, effective management and conservation of mangrove forests often depend on spatially explicit assessments of the resource. Given the remote and highly dispersed nature of mangroves, estimating their biomass and carbon through routine field-based inventories is a challenging task that is impractical for large-scale planning and assessment. Alternative approaches based on geospatial technologies are needed to support this estimation over large areas, yet the spatial data processing and analysis approaches used to estimate mangrove biomass and carbon have not been adequately investigated. In this study, we present a spatially explicit analytical framework that integrates remotely sensed data and spatial analysis approaches to support the estimation of mangrove biomass and carbon stock, and of their spatial patterns, in West Africa. Forest canopy height derived from SRTM and ICESat/GLAS data was used to estimate mangrove biomass and carbon in nine West African countries, and we developed a geospatial software toolkit that implements the proposed framework. The spatial analysis framework and software toolkit provide solid support for the estimation and relative comparison of mangrove-related metrics. The mean canopy height of mangroves in our study area is 10.2 m, and the total biomass and carbon were estimated as 272.56 Tg and 136.28 Tg, respectively. Nigeria has the highest total mangrove biomass and carbon of the nine countries, but Cameroon has the largest mean biomass and carbon density. The resulting spatially explicit distributions of mangrove biomass and carbon hold great potential for guiding the strategic planning of large-scale field-based assessments of mangrove forests. This study demonstrates the utility of online geospatial data and spatial analysis as a feasible solution for estimating the distribution of mangrove biomass and carbon at larger or smaller scales.
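The abstract does not reproduce the height-to-biomass model itself, but canopy-height-based estimates of this kind typically apply an allometric relationship per pixel and convert biomass to carbon with a fixed fraction (the reported totals of 272.56 Tg biomass and 136.28 Tg carbon imply a carbon fraction of about 0.5). The sketch below illustrates that workflow with hypothetical allometric coefficients and a toy height grid; it is not the authors' model.

```python
import numpy as np

# Hypothetical allometric coefficients; the study's actual height-biomass model
# is not reproduced here.
A_COEF, B_EXP = 10.8, 1.0      # placeholder values for illustration only
CARBON_FRACTION = 0.5          # carbon ~= 0.5 * biomass, consistent with the reported totals

def biomass_per_hectare(canopy_height_m):
    """Estimate aboveground biomass density (Mg/ha) from canopy height (m)."""
    return A_COEF * np.power(canopy_height_m, B_EXP)

def carbon_stock_tg(height_grid_m, pixel_area_ha):
    """Sum a canopy-height raster into a total carbon stock in teragrams."""
    biomass_mg = biomass_per_hectare(height_grid_m) * pixel_area_ha  # Mg per pixel
    carbon_mg = biomass_mg * CARBON_FRACTION
    return float(carbon_mg.sum() / 1e6)  # 1 Tg = 1e6 Mg

# Toy 3x3 canopy-height grid (m) with 90 m SRTM-like pixels (~0.81 ha each)
heights = np.array([[8.0, 10.2, 12.5], [9.1, 11.0, 7.4], [10.2, 13.0, 9.8]])
print(carbon_stock_tg(heights, pixel_area_ha=0.81))
```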

5.
Huynen MA, Gabaldón T, Snel B. FEBS Letters, 2005, 579(8): 1839-1845
The availability of genome sequences and functional genomics data from multiple species enables us to compare the composition of biomolecular systems like biochemical pathways and protein complexes between species. Here, we review small- and large-scale, "genomics-based" approaches to biomolecular systems variation. In general, caution is required when comparing the results of bioinformatics analyses of genomes or of functional genomics data between species. Limitations to the sensitivity of sequence analysis tools and the noisy nature of genomics data tend to lead to systematic overestimates of the amount of variation. Nevertheless, the results from detailed manual analyses, and of large-scale analyses that filter out systematic biases, point to a large amount of variation in the composition of biomolecular systems. Such observations challenge our understanding of the function of the systems and their individual components and can potentially facilitate the identification and functional characterization of sub-systems within a system. Mapping the inter-species variation of complex biomolecular systems on a phylogenetic species tree allows one to reconstruct their evolution.

6.
7.
As increasingly large amounts of data from genome and other sequencing projects become available, new approaches are needed to determine the functions of the proteins these genes encode. We show how large-scale computational analysis can help to address this challenge by linking functional information to sequence and structural similarities using protein similarity networks. Network analyses using three functionally diverse enzyme superfamilies illustrate the use of these approaches for facile updating and comparison of available structures for a large superfamily, for creation of functional hypotheses for metagenomic sequences, and to summarize the limits of our functional knowledge about even well studied superfamilies.
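As a rough illustration of the similarity-network idea (hypothetical protein names, scores and cutoff; not data from the study), pairwise similarity scores can be thresholded into a graph whose connected components suggest putative groupings within a superfamily.

```python
import networkx as nx

# Hypothetical pairwise similarity scores (e.g., -log10 of BLAST E-values);
# the superfamily data from the study are not reproduced here.
pairwise_scores = {
    ("enzA", "enzB"): 85.0,
    ("enzA", "enzC"): 12.0,
    ("enzB", "enzC"): 18.0,
    ("enzC", "enzD"): 95.0,
}

def build_similarity_network(scores, cutoff=50.0):
    """Keep an edge only if the similarity score exceeds the chosen cutoff."""
    g = nx.Graph()
    for (a, b), s in scores.items():
        g.add_node(a)
        g.add_node(b)
        if s >= cutoff:
            g.add_edge(a, b, weight=s)
    return g

net = build_similarity_network(pairwise_scores)
# Connected components at this threshold suggest putative functional groupings.
print([sorted(c) for c in nx.connected_components(net)])
```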

8.
Genomic deletions have long been known to play a causative role in microdeletion syndromes. Recent whole-genome genetic studies have shown that deletions can increase the risk for several psychiatric disorders, suggesting that genomic deletions play an important role in the genetic basis of complex traits. However, the association between genomic deletions and common, complex diseases has not yet been systematically investigated in gene mapping studies. Likelihood-based statistical methods for identifying disease-associated deletions have recently been developed for familial studies of parent-offspring trios. The purpose of this study is to develop statistical approaches for detecting genomic deletions associated with complex disease in case–control studies. Our methods are designed to be used with dense single nucleotide polymorphism (SNP) genotypes to detect deletions in large-scale or whole-genome genetic studies. As more and more SNP genotype data for genome-wide association studies become available, development of sophisticated statistical approaches will be needed that use these data. Our proposed statistical methods are designed to be used in SNP-by-SNP analyses and in cluster analyses based on combined evidence from multiple SNPs. Using simulated SNP data sets, we found that these methods are useful for detecting disease-associated deletions and are robust in the presence of linkage disequilibrium. Furthermore, we applied the proposed statistical methods to SNP genotype data of chromosome 6p for 868 rheumatoid arthritis patients and 1,197 controls from the North American Rheumatoid Arthritis Consortium. We detected disease-associated deletions within the region of human leukocyte antigen in which genomic deletions were previously discovered in rheumatoid arthritis patients.
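The likelihood-based machinery itself is not described in the abstract, but the intuition behind a SNP-by-SNP scan can be illustrated simply: a hemizygous deletion makes heterozygous genotypes appear homozygous, so an excess of homozygotes in cases relative to controls at a SNP is a crude deletion signal. The sketch below uses a Fisher's exact test on hypothetical genotype data and is not the authors' method.

```python
from scipy.stats import fisher_exact

def homozygosity_excess_pvalue(case_genotypes, control_genotypes):
    """Compare homozygote vs. heterozygote counts between cases and controls.
    Genotypes are coded as 0/1/2 copies of the minor allele; a hemizygous
    deletion inflates apparent homozygosity, so a low p-value flags a candidate."""
    def split(genos):
        het = sum(1 for g in genos if g == 1)
        return [len(genos) - het, het]  # [homozygotes, heterozygotes]
    table = [split(case_genotypes), split(control_genotypes)]
    _, p_value = fisher_exact(table)
    return p_value

# Hypothetical genotypes at one SNP (cases show few heterozygotes)
cases = [0, 0, 2, 0, 2, 0, 1, 2, 0, 0]
controls = [0, 1, 1, 2, 1, 0, 1, 1, 2, 1]
print(homozygosity_excess_pvalue(cases, controls))
```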

9.

Background  

Since the introduction of large-scale genotyping methods that can be utilized in genome-wide association (GWA) studies for deciphering complex diseases, statistical genetics has faced the tremendous challenge of how best to analyze such data. A plethora of advanced model-based methods for genetic mapping of traits has been available for more than 10 years in animal and plant breeding. However, most such methods are computationally intractable in the context of genome-wide studies. Therefore, it is hardly surprising that GWA analyses have in practice been dominated by simple statistical tests concerned with a single marker locus at a time, while the more advanced approaches have appeared only relatively recently in the biomedical and statistical literature.
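The "simple statistical tests concerned with a single marker locus at a time" mentioned above are typically allelic or genotypic contingency-table tests. A minimal sketch of such a per-SNP allelic test follows, with hypothetical genotype data (not from any study listed here).

```python
import numpy as np
from scipy.stats import chi2_contingency

def allelic_association_pvalue(case_genotypes, control_genotypes):
    """Single-marker allelic test: compare minor- vs. major-allele counts
    between cases and controls (genotypes coded as 0/1/2 minor-allele copies)."""
    def allele_counts(genos):
        genos = np.asarray(genos)
        minor = int(genos.sum())
        return [minor, 2 * len(genos) - minor]  # [minor, major]
    table = [allele_counts(case_genotypes), allele_counts(control_genotypes)]
    _, p_value, _, _ = chi2_contingency(table)
    return p_value

# Hypothetical genotypes at one SNP
print(allelic_association_pvalue([2, 1, 1, 2, 0, 1, 2, 1],
                                 [0, 1, 0, 0, 1, 0, 1, 0]))
```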

10.
The Arabidopsis proline-rich extensin-like receptor kinase (PERK) family consists of 15 predicted receptor kinases. A comprehensive expression analysis was undertaken to identify overlapping and unique expression patterns within this family relative to their phylogeny. Three different approaches were used to study AtPERK gene family expression, and included analyses of the EST, MPSS and NASCArrays databases as well as experimental RNA blot analyses. Some of the AtPERK members were identified as tissue-specific genes while others were more broadly expressed. While in some cases there was a good association between these different expression patterns and the position of the AtPERK members in the kinase phylogeny, in other cases divergence of expression patterns was seen. The PERK expression data identified by the bioinformatics and experimental approaches were found generally to show similar trends and supported the use of data from large-scale expression studies for obtaining preliminary expression data. Thus, the bioinformatics survey for ESTs and microarrays is a powerful comprehensive approach for obtaining a genome-wide view of genes in a multigene family.

11.
A natural shift is taking place in the approaches being adopted by plant scientists in response to the accessibility of systems-based technology platforms. Metabolomics is one such field, which involves a comprehensive non-biased analysis of metabolites in a given cell at a specific time. This review briefly introduces the emerging field and a range of analytical techniques that are most useful in metabolomics when combined with computational approaches in data analyses. Using cases from Arabidopsis and other selected plant systems, this review highlights how information can be integrated from metabolomics and other functional genomics platforms to obtain a global picture of plant cellular responses. We discuss how metabolomics is enabling large-scale and parallel interrogation of cell states under different stages of development and defined environmental conditions to uncover novel interactions among various pathways. Finally, we discuss selected applications of metabolomics. This special review article is dedicated to the commemoration of the retirement of Dr. Oluf L. Gamborg after 25 years of service as Founding Managing Editor of Plant Cell Reports. RB and KN have contributed equally to this review.

12.
13.
The traditional approach to bioinformatics analyses relies on independent task-specific services and applications, using different input and output formats, often idiosyncratic, and frequently not designed to inter-operate. In general, such analyses were performed by experts who manually verified the results obtained at each step in the process. Today, the amount of bioinformatics information continuously being produced means that handling the various applications used to study this information presents a major data management and analysis challenge to researchers. It is now impossible to manually analyse all this information and new approaches are needed that are capable of processing the large-scale heterogeneous data in order to extract the pertinent information. We review the recent use of integrated expert systems aimed at providing more efficient knowledge extraction for bioinformatics research. A general methodology for building knowledge-based expert systems is described, focusing on the unstructured information management architecture, UIMA, which provides facilities for both data and process management. A case study involving a multiple alignment expert system prototype called AlexSys is also presented.

14.

Background  

The neural retina is a highly structured tissue of the central nervous system that is formed by seven different cell types that are arranged in layers. Despite much effort, the genetic mechanisms that underlie retinal development are still poorly understood. In recent years, large-scale genomic analyses have identified candidate genes that may play a role in retinal neurogenesis, axon guidance and other key processes during the development of the visual system. Thus, new and rapid techniques are now required to carry out high-throughput analyses of all these candidate genes in mammals. Gene delivery techniques have been described to express exogenous proteins in the retina of newborn mice but these approaches do not efficiently introduce genes into the only retinal cell type that transmits visual information to the brain, the retinal ganglion cells (RGCs).

15.
The study of protein-protein interactions (PPIs) is essential to uncover unknown functions of proteins at the molecular level and to gain insight into complex cellular networks. Affinity purification and mass spectrometry (AP-MS), yeast two-hybrid, imaging approaches and numerous diverse databases have been developed as strategies to analyze PPIs. The past decade has seen an increase in the number of identified proteins with the development of MS and large-scale proteome analyses. Consequently, the false-positive protein identification rate has also increased. Therefore, the general consensus is to confirm PPI data using one or more independent approaches for an accurate evaluation. Furthermore, identifying minor PPIs is fundamental for understanding the functions of transient interactions and low-abundance proteins. Besides establishing PPI methodologies, we are now seeing the development of new methods and/or improvements in existing methods, which involve identifying minor proteins by MS, multidimensional protein identification technology or OFFGEL electrophoresis analyses, one-shot analysis with a long column or filter-aided sample preparation methods. These advanced techniques should allow thousands of proteins to be identified, whereas in-depth proteomic methods should permit the identification of transient binding or PPIs with weak affinity. Here, the current status of PPI analysis is reviewed and some advanced techniques are discussed briefly along with future challenges for plant proteomics.

16.
Braun P. Proteomics, 2012, 12(10): 1499-1518
Protein interactions mediate essentially all biological processes and analysis of protein-protein interactions using both large-scale and small-scale approaches has contributed fundamental insights to the understanding of biological systems. In recent years, interactome network maps have emerged as an important tool for analyzing and interpreting genetic data of complex phenotypes. Complementary experimental approaches to test for binary, direct interactions, and for membership in protein complexes are used to explore the interactome. The two approaches are not redundant but yield orthogonal perspectives onto the complex network of physical interactions by which proteins mediate biological processes. In recent years, several publications have demonstrated that interactions from high-throughput experiments can be as reliable as the high-quality subset of interactions identified in small-scale studies. Critical for this insight was the introduction of standardized experimental benchmarking of interaction and validation assays using reference sets. The data obtained in these benchmarking experiments have resulted in greater appreciation of the limitations and the complementary strengths of different assays. Moreover, benchmarking is a central element of a conceptual framework to estimate interactome sizes and thereby measure progress toward near-complete network maps. These estimates have revealed that current large-scale data sets, although often of high quality, cover only a small fraction of a given interactome. Here, I review the findings of assay benchmarking and discuss implications for quality control, and for strategies toward obtaining a near-complete map of the interactome of an organism.
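One common way to turn benchmarked overlap between screens into an interactome-size estimate is a capture-recapture calculation: if two independent screens of comparable quality recover n1 and n2 true interactions with an overlap of m, the total space is roughly n1 * n2 / m. This Lincoln-Petersen sketch uses hypothetical numbers and is only a simplified stand-in for the framework discussed in the review.

```python
def lincoln_petersen_estimate(screen_a, screen_b):
    """Estimate total interactome size from two independent interaction screens:
    N ~= (n1 * n2) / overlap. Interactions are treated as unordered pairs."""
    a = {frozenset(pair) for pair in screen_a}
    b = {frozenset(pair) for pair in screen_b}
    overlap = len(a & b)
    if overlap == 0:
        raise ValueError("No overlap between screens; estimate is undefined.")
    return len(a) * len(b) / overlap

# Hypothetical screens: 400 and 500 interactions with 40 found by both
screen_a = [(f"p{i}", f"q{i}") for i in range(400)]
screen_b = [(f"p{i}", f"q{i}") for i in range(40)] + [(f"x{i}", f"y{i}") for i in range(460)]
print(lincoln_petersen_estimate(screen_a, screen_b))  # ~5000 interactions
```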

17.
Recently, large-scale experiments have provided new insights into the complex protein interaction network in yeast. However, previous analyses have shown that the number of interacting pairs that are common to different methods is extremely low and, therefore, less informative than expected. In this article, we show that comparing the connectivities of individual proteins can reveal that a common tendency between methods has been missed by the pairwise comparison of interactions. We found significant correlations between experimental methods and also between various in silico methods. Exceptionally, a computational method, gene neighbourhood, correlates with both in silico and experimental approaches.
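A hedged sketch of the connectivity comparison described here (toy interaction lists and protein names, not the actual yeast data sets): compute each protein's degree in two interaction data sets and correlate the two degree profiles.

```python
from collections import Counter
from scipy.stats import spearmanr

def degree_profile(interactions, proteins):
    """Number of interaction partners per protein in one data set."""
    deg = Counter()
    for a, b in interactions:
        deg[a] += 1
        deg[b] += 1
    return [deg[p] for p in proteins]

# Hypothetical interaction lists from two detection methods
method1 = [("A", "B"), ("A", "C"), ("A", "D"), ("B", "C"), ("D", "E")]
method2 = [("A", "B"), ("A", "E"), ("A", "C"), ("B", "D"), ("C", "D")]

proteins = sorted({p for pair in method1 + method2 for p in pair})
rho, p_value = spearmanr(degree_profile(method1, proteins),
                         degree_profile(method2, proteins))
print(rho, p_value)  # correlated connectivities despite few shared pairs
```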

18.
Despite numerous large-scale phylogenomic studies, certain parts of the mammalian tree are extraordinarily difficult to resolve. We used the coding regions from 19 completely sequenced genomes to study the relationships within the super-clade Euarchontoglires (Primates, Rodentia, Lagomorpha, Dermoptera and Scandentia) because the placement of Scandentia within this clade is controversial. The difficulty in resolving this issue is due to the short time spans between the early divergences of Euarchontoglires, which may cause incongruent gene trees. The conflict in the data can be depicted by network analyses and the contentious relationships are best reconstructed by coalescent-based analyses. This method is expected to be superior to analyses of concatenated data in reconstructing a species tree from numerous gene trees. The total concatenated dataset used to study the relationships in this group comprises 5,875 protein-coding genes (9,799,170 nucleotides) from all orders except Dermoptera (flying lemurs). Reconstruction of the species tree from 1,006 gene trees using coalescent models placed Scandentia as sister group to the primates, which is in agreement with maximum likelihood analyses of concatenated nucleotide sequence data. Additionally, both analytical approaches favoured the tarsier as the sister taxon to Anthropoidea, thus placing it within the haplorrhine clade. When divergence times are short, such as in radiations over periods of a few million years, even genome-scale analyses struggle to resolve phylogenetic relationships. On these short branches, processes such as incomplete lineage sorting and possibly hybridization occur, making it preferable to base phylogenomic analyses on coalescent methods.

19.
Conceptual and logistical challenges associated with the design and analysis of ecological restoration experiments are often viewed as being insurmountable, thereby limiting the potential value of restoration experiments as tests of ecological theory. Such research constraints are, however, not unique within the environmental sciences. Numerous natural and anthropogenic disturbances represent unplanned, uncontrollable events that cannot be replicated or studied using traditional experimental approaches and statistical analyses. A broad mix of appropriate research approaches (e.g., long-term studies, large-scale comparative studies, space-for-time substitution, modeling, and focused experimentation) and analytical tools (e.g., observational, spatial, and temporal statistics) are available and required to advance restoration ecology as a scientific discipline. In this article, research design and analytical options are described and assessed in relation to their applicability to restoration ecology. Significant research benefits may be derived from explicitly defining conceptual models and presuppositions, developing multiple working hypotheses, and developing and archiving high-quality data and metadata. Flexibility in research approaches and statistical analyses, high-quality databases, and new sampling approaches that support research at broader spatial and temporal scales are critical for enhancing ecological understanding and supporting further development of restoration ecology as a scientific discipline.

20.

Background  

New approaches are needed for large-scale predictive modeling of cellular signaling networks. While mass action and enzyme kinetic approaches require extensive biochemical data, current logic-based approaches are used primarily for qualitative predictions and have lacked direct quantitative comparison with biochemical models.
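Logic-based (Boolean) modeling replaces rate constants with ON/OFF rules, which is why it needs far less biochemical data than mass-action or enzyme-kinetic models. The toy cascade below (hypothetical nodes and rules, not the networks analyzed in the study) shows the basic synchronous-update scheme.

```python
# A minimal sketch of logic-based signaling modeling: each node is ON/OFF and
# is updated synchronously from Boolean rules until a steady state is reached.
rules = {
    "receptor": lambda s: s["ligand"],
    "kinase": lambda s: s["receptor"] and not s["phosphatase"],
    "tf": lambda s: s["kinase"],
    "ligand": lambda s: s["ligand"],            # treated as a fixed input
    "phosphatase": lambda s: s["phosphatase"],  # treated as a fixed input
}

def simulate(state, max_steps=20):
    """Synchronous Boolean updates; stops when the state no longer changes."""
    for _ in range(max_steps):
        new_state = {node: bool(rule(state)) for node, rule in rules.items()}
        if new_state == state:
            break
        state = new_state
    return state

initial = {"ligand": True, "receptor": False, "kinase": False,
           "tf": False, "phosphatase": False}
print(simulate(initial))  # the signal propagates to the transcription factor
```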
