Similar literature
A total of 20 similar references were retrieved (search time: 8 ms).
1.
Integrating concepts of maintenance and of origins is essential to explaining biological diversity. The unified theory of evolution attempts to find a common theme linking production rules inherent in biological systems, explaining the origin of biological order as a manifestation of the flow of energy and the flow of information on various spatial and temporal scales, with the recognition that natural selection is an evolutionarily relevant process. Biological systems persist in space and time by transforming energy from one state to another in a manner that generates structures which allow the system to continue to persist. Two classes of energetic transformations allow this: heat-generating transformations, resulting in a net loss of energy from the system, and conservative transformations, changing unusable energy into states that can be stored and used subsequently. All conservative transformations in biological systems are coupled with heat-generating transformations; hence, inherent biological production, or genealogical processes, is positively entropic. There is a self-organizing phenomenology common to genealogical phenomena, which imparts an arrow of time to biological systems. Natural selection, which by itself is time-reversible, contributes to the organization of the self-organized genealogical trajectories. The interplay of genealogical (diversity-promoting) and selective (diversity-limiting) processes produces biological order to which the primary contribution is genealogical history. Dynamic changes occurring on time scales shorter than speciation rates are microevolutionary; those occurring on time scales longer than speciation rates are macroevolutionary. Macroevolutionary processes are neither reducible to, nor autonomous from, microevolutionary processes. Authorship alphabetical.

2.
3.
One of the main goals in proteomics is to answer biological and molecular questions about a set of identified proteins. To achieve this goal, one has to extract and collect the existing biological data from public repositories for every protein and afterwards analyze and organize the collected data. Because of the complexity of this task and the huge amount of data available, it is not feasible to gather this information by hand, making automatic methods of data collection necessary. Within a proteomic context, we have developed the Protein Information and Knowledge Extractor (PIKE), which solves this problem by automatically accessing several public information systems and databases across the Internet. The PIKE bioinformatics tool starts with a set of identified proteins, listed by their accession codes in the most common protein databases, and retrieves all relevant and up-to-date information from the most relevant databases. Once the search is complete, PIKE summarizes the information for every single protein in several file formats that allow it to be shared and exchanged with other software tools. In our opinion, PIKE represents a great step forward for information procurement and drastically reduces manual database validation for large proteomic studies. It is available at http://proteo.cnb.csic.es/pike .
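As an illustration of the kind of automated retrieval PIKE performs, the sketch below queries UniProtKB for a short list of accession codes and reduces each entry to a small summary record. This is not PIKE's own code; the REST endpoint and the JSON field names are assumptions based on UniProt's public API.

```python
import json
import urllib.request

# Minimal sketch of automated protein-information retrieval, in the spirit of
# PIKE: start from a list of accession codes and query a public repository.
# The endpoint and field names below are assumptions about the UniProt API.
UNIPROT_URL = "https://rest.uniprot.org/uniprotkb/{acc}.json"

def fetch_protein(acc: str) -> dict:
    """Retrieve one UniProtKB entry as parsed JSON."""
    with urllib.request.urlopen(UNIPROT_URL.format(acc=acc), timeout=30) as resp:
        return json.load(resp)

def summarise(entry: dict) -> dict:
    """Reduce a full entry to a few fields suitable for export."""
    return {
        "accession": entry.get("primaryAccession"),
        "name": entry.get("proteinDescription", {})
                     .get("recommendedName", {})
                     .get("fullName", {})
                     .get("value"),
        "organism": entry.get("organism", {}).get("scientificName"),
        "length": entry.get("sequence", {}).get("length"),
    }

if __name__ == "__main__":
    accessions = ["P69905", "P68871"]           # human haemoglobin chains
    records = [summarise(fetch_protein(a)) for a in accessions]
    print(json.dumps(records, indent=2))        # exportable summary per protein
```

Retrieval from other repositories would follow the same pattern: resolve the accession, fetch the entry, and keep only the fields needed for the downstream summary.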

4.
The intermediary steps between a biological hypothesis, concretized in the input data, and meaningful results, validated by biological experiments, commonly employ bioinformatics tools. From storage of the data to statistical analysis of the significance of the results, every step in a bioinformatics analysis has been intensively studied, and the resulting methods and models patented. This review summarizes the bioinformatics patents that have been developed mainly for the study of genes, and points out the universal applicability of bioinformatics methods to related studies such as RNA interference. More specifically, we give an overview of the steps undertaken in the majority of bioinformatics analyses, highlighting, for each, the various approaches that have been developed to reveal details from different perspectives. First, we consider data warehousing, the initial task, which has to be performed efficiently, with an optimized database structure, in order to facilitate both the subsequent steps and the retrieval of information. Next, we review data mining, which occupies the central part of most bioinformatics analyses, presenting patents concerning differential expression and unsupervised and supervised learning. Last, we discuss how networks of interactions between genes or other players in the cell may be created; these help draw biological conclusions and have been described in several patents.
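To make the central data-mining step concrete, the sketch below runs a generic per-gene differential-expression test on a toy expression matrix. It stands in for the patented methods surveyed in the review rather than reproducing any of them; the spiked-in genes and the FDR threshold are illustrative choices.

```python
import numpy as np
from scipy import stats

# Toy expression matrices: 500 genes x 6 samples per condition.
rng = np.random.default_rng(0)
control = rng.normal(10.0, 1.0, size=(500, 6))
treated = rng.normal(10.0, 1.0, size=(500, 6))
treated[:25] += 2.0                                   # spike in 25 "regulated" genes

# Per-gene Welch t-test between the two conditions.
t_stat, p_val = stats.ttest_ind(treated, control, axis=1, equal_var=False)

# Benjamini-Hochberg adjustment to control the false discovery rate.
order = np.argsort(p_val)
adj = p_val[order] * len(p_val) / np.arange(1, len(p_val) + 1)
adj = np.minimum.accumulate(adj[::-1])[::-1]          # enforce monotonicity
fdr = np.empty_like(adj)
fdr[order] = np.minimum(adj, 1.0)

print("genes called at FDR < 0.05:", int(np.sum(fdr < 0.05)))
```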

5.
6.
Bioinformatics is an integral aspect of plant and crop science research. Developments in data management and analytical software are reviewed, with an emphasis on applications in functional genomics. This includes information resources for Arabidopsis and crop species, and tools available for the analysis and visualisation of comparative genomic data. Approaches used to explore relationships between plant genes and expressed sequences are compared, including the use of ontologies. The impact of bioinformatics on forward and reverse genetics is described, together with the potential of data mining. The role of bioinformatics is explored in the wider context of plant and crop science.

7.
This essay provides an introduction to the terminology, concepts, methods, and challenges of image‐based modeling in biology. Image‐based modeling and simulation aims at using systematic, quantitative image data to build predictive models of biological systems that can be simulated with a computer. This allows one to disentangle molecular mechanisms from effects of shape and geometry. Questions like “what is the functional role of shape” or “how are biological shapes generated and regulated” can be addressed in the framework of image‐based systems biology. The combination of image quantification, model building, and computer simulation is illustrated here using the example of diffusion in the endoplasmic reticulum.
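A minimal flavour of image-based simulation can be given with an explicit finite-difference diffusion step restricted to a geometry mask. In a real study the mask would be segmented from microscopy images rather than drawn by hand, and the ER model discussed in the essay is considerably more sophisticated; this is only a sketch of the idea.

```python
import numpy as np

# Hand-made ring mask standing in for a segmented organelle outline.
n = 64
y, x = np.mgrid[0:n, 0:n]
r = np.hypot(x - n / 2, y - n / 2)
mask = (r > 10) & (r < 25)                 # pixels that belong to the structure

c = np.zeros((n, n))
c[mask & (x < n / 2)] = 1.0                # initial tracer on one side

D, dt = 1.0, 0.2                           # diffusion coefficient, time step (dx = 1)
for _ in range(500):
    lap = (np.roll(c, 1, 0) + np.roll(c, -1, 0) +
           np.roll(c, 1, 1) + np.roll(c, -1, 1) - 4 * c)
    c += dt * D * lap
    # Crude absorbing boundary outside the geometry; a faithful organelle
    # model would impose no-flux conditions at the membrane instead.
    c[~mask] = 0.0

print("tracer remaining inside the structure:", round(float(c[mask].sum()), 2))
```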

8.
9.
10.
11.
Genomic and proteomic analyses generate a massive amount of data that requires specific bioinformatic tools for its management and interpretation. GARBAN II, developed from the earlier GARBAN platform, provides an integrated framework to simultaneously analyse and compare multiple datasets from DNA microarrays and proteomic studies. The general architecture, gene classification and comparison, and graphical representation have been redesigned to make the system user-friendly and to improve its capabilities and efficiency. Additionally, GARBAN II has been extended with new applications to display networks of coexpressed genes and to integrate access to BioRag and MotifScanner, so as to facilitate the holistic analysis of users' data.
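The sketch below shows the kind of coexpression network GARBAN II can display, built by thresholding a gene-gene correlation matrix. The gene labels, the synthetic data, and the 0.9 cut-off are illustrative assumptions, not GARBAN II defaults.

```python
import numpy as np
import networkx as nx

# Synthetic expression matrix: 8 genes x 20 samples, with genes 0 and 1 forced to co-vary.
rng = np.random.default_rng(1)
expr = rng.normal(size=(8, 20))
expr[1] = expr[0] + rng.normal(0, 0.1, 20)
genes = [f"gene{i}" for i in range(expr.shape[0])]

# Connect gene pairs whose expression profiles are strongly correlated.
corr = np.corrcoef(expr)
net = nx.Graph()
net.add_nodes_from(genes)
for i in range(len(genes)):
    for j in range(i + 1, len(genes)):
        if abs(corr[i, j]) > 0.9:                  # keep only strong coexpression
            net.add_edge(genes[i], genes[j], weight=round(float(corr[i, j]), 2))

print(list(net.edges(data=True)))
```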

12.
13.
Modern 'omics' technologies produce huge amounts of data about life processes. For analysis and data-mining purposes, these data have to be considered in the context of the underlying biological networks. This work presents an approach for integrating data from biological experiments into metabolic networks by mapping the data onto network elements and visualising the data-enriched networks automatically. This methodology is implemented in DBE, an information system that supports the analysis and visualisation of experimental data in the context of metabolic networks. It consists of five parts: (1) the DBE-Database for consistent data storage, (2) the Excel-Importer application for data import, (3) the DBE-Website as the interface to the system, (4) the DBE-Pictures application for the upload and download of binary (e.g. image) files, and (5) DBE-Gravisto, a network analysis and graph visualisation system. The usability of this approach is demonstrated with two examples.
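The core mapping step can be sketched as follows: annotate the nodes of a small metabolic network with measured values so that the enriched network can then be visualised in context. The pathway fragment and the measurements are invented for illustration and do not come from the DBE system itself.

```python
import networkx as nx

# A tiny glycolysis fragment standing in for a metabolic network.
pathway = nx.DiGraph()
pathway.add_edges_from([
    ("glucose", "glucose-6-P"),
    ("glucose-6-P", "fructose-6-P"),
    ("fructose-6-P", "fructose-1,6-bisP"),
])

# Hypothetical measured metabolite levels (mM) from one experiment.
measurements = {"glucose": 5.2, "fructose-6-P": 0.8}

# Map the data onto the network elements; unmeasured nodes stay annotated as such.
for metabolite in pathway.nodes:
    value = measurements.get(metabolite)
    pathway.nodes[metabolite]["level"] = value
    status = f"{value} mM" if value is not None else "not measured"
    print(f"{metabolite:>20}: {status}")
```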

14.
Fueled by novel technologies capable of producing massive amounts of data for a single experiment, scientists are faced with an explosion of information that must be rapidly analyzed and combined with other data to form hypotheses and create knowledge. Today, numerous biological questions can be answered without entering a wet lab: scientific protocols designed to answer them can be run entirely on a computer. Biological resources are often complementary, focused on different objects and reflecting various experts' points of view. Exploiting the richness and diversity of these resources is crucial for scientists. However, as resources multiply, scientists face the problem of selecting sources and tools when interpreting their data. In this paper, we analyze the way in which biologists express and implement scientific protocols, and we identify the requirements for a system that can guide scientists in constructing protocols to answer new biological questions. We present two such systems, BioNavigation and BioGuide, dedicated to helping scientists select resources by following suitable paths within the growing network of interconnected biological resources.
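A toy version of the underlying idea: model the resources as a directed graph whose edges are cross-references, then enumerate the paths a protocol could follow between two kinds of entities. The resources and links below are a simplified, hypothetical selection, not the graphs actually used by BioNavigation or BioGuide.

```python
import networkx as nx

# Resource graph: nodes are entity types or databases, edges are cross-references.
resources = nx.DiGraph()
resources.add_edges_from([
    ("gene", "GenBank"), ("GenBank", "UniProt"),
    ("gene", "Ensembl"), ("Ensembl", "UniProt"),
    ("UniProt", "PDB"), ("UniProt", "GO"),
])

# All distinct routes a protocol could take from a gene identifier to a 3D structure entry.
for path in nx.all_simple_paths(resources, "gene", "PDB"):
    print(" -> ".join(path))
```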

15.
16.
17.
Collecting and organizing systematic sets of protein data
Systems biology, particularly of mammalian cells, is data starved. However, technologies are now in place to obtain rich data, in a form suitable for model construction and validation, that describes the activities, states and locations of cell-signalling molecules. The key is to use several measurement technologies simultaneously and, recognizing each of their limits, to assemble a self-consistent compendium of systematic data.
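One way to keep such a compendium self-consistent is to store measurements from each technology side by side with their provenance, so that disagreements remain visible rather than being averaged away. The sketch below does this for two hypothetical read-outs of the same signalling proteins; the protein names and values are invented.

```python
import pandas as pd

# Two hypothetical measurement technologies reporting on the same quantity.
western = pd.DataFrame({
    "protein": ["ERK1", "AKT1"],
    "phospho_fraction": [0.42, 0.15],
    "method": "western blot",
})
ms = pd.DataFrame({
    "protein": ["ERK1", "AKT1", "MEK1"],
    "phospho_fraction": [0.38, 0.22, 0.05],
    "method": "mass spectrometry",
})

# Keep every measurement with its provenance, then cross-check per protein.
compendium = pd.concat([western, ms], ignore_index=True)
print(compendium.pivot(index="protein", columns="method",
                       values="phospho_fraction"))
```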

18.
This paper gives a brief survey of the use of algebraic rewriting systems for modelling and simulating various biological processes, particularly at the cellular level.
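A minimal example of the approach is a multiset-rewriting system in which rules consume and produce molecular species step by step. The species and rules below are invented purely for illustration and do not come from the surveyed papers.

```python
from collections import Counter

# Rewriting rules as (consumed, produced) multisets: a toy signalling cascade.
rules = [
    (Counter({"ligand": 1, "receptor": 1}), Counter({"complex": 1})),
    (Counter({"complex": 1, "kinase": 1}), Counter({"complex": 1, "kinase*": 1})),
]

def step(state: Counter) -> Counter:
    """Apply the first applicable rule once; return the new state."""
    for lhs, rhs in rules:
        if all(state[s] >= n for s, n in lhs.items()):
            return state - lhs + rhs
    return state                                   # no rule applies: fixed point

# Simulate a few rewriting steps from an initial cell state.
state = Counter({"ligand": 2, "receptor": 1, "kinase": 3})
for _ in range(4):
    print(dict(state))
    state = step(state)
```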

19.
OBJECTIVE--To collect a valid, complete, continuous, and representative database of morbidity presenting to primary care and to use the data to help commission services on the basis of local need and effectiveness. SETTING--Computerised general practices in Somerset. METHODS--Participating general practices were selected to be representative of the district health authority population for general practice and population characteristics. All conditions presented at face to face consultations were assigned a Read code and episode type and the data were regularly validated. Data were sent by modem from the practices via a third party to the health authority each week. MAIN OUTCOME MEASURES--Proportion of consultations coded and accuracy of coding. RESULTS--11 practices agreed to participate. Validations for completeness during April 1994 to March 1995 revealed that 96.4% of the records were coded; 94% of the 1090 records validated had appropriate episode types and 87% appropriate Read codes. The results have been used to help formulate the health authority's purchasing plans and have enabled a change in the local contracts for surgery for glue ear. CONCLUSIONS--The project has shown the feasibility of establishing a network of practices recording and reporting the morbidity seen in primary care. Early indications are that the data can be useful in evidence based purchasing.
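The completeness check described in the methods can be sketched as a simple pass over an extract of consultation records, counting how many carry a Read code and an episode type. The record structure and the sample codes below are hypothetical placeholders, not data from the Somerset project.

```python
# Hypothetical weekly extract of consultation records.
consultations = [
    {"patient": 1, "read_code": "H06..", "episode": "First"},
    {"patient": 2, "read_code": "F4C0.", "episode": "Ongoing"},
    {"patient": 3, "read_code": None,    "episode": "First"},
]

# Completeness metrics of the kind reported in the results.
coded = sum(1 for c in consultations if c["read_code"])
typed = sum(1 for c in consultations if c["episode"])
print(f"records with a Read code:     {100 * coded / len(consultations):.1f}%")
print(f"records with an episode type: {100 * typed / len(consultations):.1f}%")
```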

20.
The International Journal of Life Cycle Assessment - Due to population growth, urban water demand is expected to increase significantly, as well as the environmental and economic costs required to...
