Similar Documents
20 similar documents retrieved (search time: 15 ms)
1.
Genomic and proteomic analyses generate a massive amount of data that requires specific bioinformatic tools for its management and interpretation. GARBAN II, developed from the previous GARBAN platform, provides an integrated framework to simultaneously analyse and compare multiple datasets from DNA microarrays and proteomic studies. The general architecture, gene classification and comparison, and graphical representation have been redesigned to make the system more user-friendly and to improve its capabilities and efficiency. Additionally, GARBAN II has been extended with new applications that display networks of coexpressed genes and integrate access to BioRag and MotifScanner, facilitating the holistic analysis of users' data.
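As an illustration of the kind of coexpressed-gene network such a platform can display, the sketch below builds a simple correlation-thresholded network from an expression matrix. It is not GARBAN II's actual procedure; the function name, the Pearson-correlation criterion and the 0.9 cutoff are assumptions for demonstration only.

```python
import numpy as np

def coexpression_network(expr, gene_ids, threshold=0.9):
    """Build a simple coexpression network: genes are nodes, and an edge
    connects two genes whose expression profiles (rows of `expr`) have an
    absolute Pearson correlation at or above `threshold`."""
    corr = np.corrcoef(expr)                       # gene-by-gene correlation matrix
    edges = []
    n = len(gene_ids)
    for i in range(n):
        for j in range(i + 1, n):
            if abs(corr[i, j]) >= threshold:
                edges.append((gene_ids[i], gene_ids[j], corr[i, j]))
    return edges

# Toy example: 4 genes measured across 6 conditions.
expr = np.array([[1.0, 2.0, 3.0, 4.0, 5.0, 6.0],
                 [1.1, 2.1, 2.9, 4.2, 5.1, 5.8],   # tracks gene 1 closely
                 [6.0, 5.0, 4.0, 3.0, 2.0, 1.0],   # anti-correlated with gene 1
                 [2.0, 2.0, 5.0, 1.0, 4.0, 3.0]])  # unrelated profile
print(coexpression_network(expr, ["g1", "g2", "g3", "g4"]))
```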

2.
Nowadays we are experiencing a remarkable growth in the number of databases that have become accessible over the Web. However, in a certain number of cases, for example in the case of BioImage, this information is not of a textual nature, thus posing new challenges in the design of tools to handle these data. In this work, we concentrate on the development of new mechanisms aimed at "querying" these databases of complex data sets by their intrinsic content, rather than by their textual annotations only. We concentrate our efforts on a subset of BioImage containing 3D images (volumes) of biological macromolecules, implementing a first prototype of a "query-by-content" system. In the context of databases of complex data types, the term query-by-content refers to data modeling techniques in which user-defined functions aim at "understanding" (to some extent) the informational content of the data sets. In these systems the matching criteria introduced by the user are related to intrinsic features of the 3D images themselves, hence complementing traditional queries by textual keywords only. Efficient computational algorithms are required in order to "extract" structural information from the 3D images prior to storing them in the database. Also, easy-to-use interfaces should be implemented in order to obtain feedback from the expert. Our query-by-content prototype is used to construct a concrete query, making use of basic structural features, which are then evaluated over a set of three-dimensional images of biological macromolecules. This experimental implementation can be accessed via the Web at the BioImage server in Madrid, at http://www.bioimage.org/qbc/index.html.
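The abstract does not spell out which structural features the prototype extracts, so the following is only a hedged sketch of the general idea: compute a few rotation-invariant descriptors of a 3D density map and rank database entries by distance in feature space. Function names and feature choices are assumptions, not the BioImage implementation.

```python
import numpy as np

def structural_features(volume):
    """Compute a few rotation-invariant descriptors of a 3D density map:
    total mass, radius of gyration, and sorted principal moments."""
    coords = np.argwhere(volume > 0).astype(float)
    weights = volume[volume > 0].astype(float)
    total = weights.sum()
    center = (coords * weights[:, None]).sum(axis=0) / total
    d = coords - center
    rg = np.sqrt((weights * (d ** 2).sum(axis=1)).sum() / total)
    cov = (d * weights[:, None]).T @ d / total       # weighted covariance
    moments = np.sort(np.linalg.eigvalsh(cov))       # invariant to orientation
    return np.array([total, rg, *moments])

def query_by_content(query_volume, database):
    """Rank database entries (name -> 3D array) by feature-space distance."""
    q = structural_features(query_volume)
    scores = {name: np.linalg.norm(structural_features(v) - q)
              for name, v in database.items()}
    return sorted(scores.items(), key=lambda kv: kv[1])
```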

3.
A visual model for object detection is proposed. In order to make its detection ability comparable with existing technical methods for object detection, an evolution equation for the neurons in the model is derived from the computational principle of active contours. The hierarchical structure of the model emerges naturally from the evolution equation. One drawback associated with the initial values of active contours is alleviated by introducing and formulating convexity, which is a visual property. Numerical experiments show that the proposed model detects objects with complex topologies and that it is tolerant of noise. A visual attention model is introduced into the proposed model. Further simulations show that the visual properties of the model are consistent with the results of psychological experiments on the relation between figure–ground reversal and visual attention. We also demonstrate that the model tends to perceive smaller regions as figures, a characteristic observed in human visual perception. This work was partially supported by a Grant-in-Aid for Scientific Research (#14780254) from the Japan Society for the Promotion of Science.
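The paper's evolution equation is not reproduced in the abstract. As a rough, hedged illustration of the active-contour principle the model builds on, the sketch below implements a minimal greedy snake (in the spirit of Williams and Shah): each contour point moves to the 3x3 neighbour that minimises a weighted sum of continuity, curvature and image terms. All weights, names and the greedy scheme itself are assumptions, not the published model.

```python
import numpy as np

def greedy_snake_step(points, edge_map, alpha=1.0, beta=1.0, gamma=1.0):
    """One greedy update of a closed active contour.
    points   : (N, 2) integer array of (row, col) contour points
    edge_map : 2D array, high where image edges are strong
    Each point moves to the 3x3 neighbour minimising
    alpha*continuity + beta*curvature - gamma*edge_strength."""
    n = len(points)
    mean_dist = np.mean(np.linalg.norm(
        np.diff(points, axis=0, append=points[:1]), axis=1))
    new_points = points.copy()
    for i in range(n):
        prev_pt, next_pt = new_points[i - 1], points[(i + 1) % n]
        best, best_cost = points[i], np.inf
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                cand = points[i] + np.array([dr, dc])
                r, c = cand
                if not (0 <= r < edge_map.shape[0] and 0 <= c < edge_map.shape[1]):
                    continue
                continuity = abs(mean_dist - np.linalg.norm(cand - prev_pt))
                curvature = np.sum((prev_pt - 2 * cand + next_pt) ** 2)
                cost = alpha * continuity + beta * curvature - gamma * edge_map[r, c]
                if cost < best_cost:
                    best, best_cost = cand, cost
        new_points[i] = best
    return new_points
```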

4.

Background  

The amount of available biological information is rapidly increasing, and the focus of biological research has moved from single components to networks and even larger projects aiming at the analysis, modelling and simulation of biological networks as well as large-scale comparison of cellular properties. It is therefore essential that biological knowledge is easily accessible. However, most information is contained in the written literature in an unstructured way, so methods for the systematic extraction of knowledge directly from the primary literature have to be deployed.

5.
The magnitude and urgency of the biodiversity crisis is widely recognized within scientific and political organizations. However, a lack of integrated measures for biodiversity has greatly constrained the national and international response to the biodiversity crisis. Thus, integrated biodiversity indexes will greatly facilitate information transfer from science toward other areas of human society. The Nature Index framework samples scientific information on biodiversity from a variety of sources, synthesizes this information, and then transmits it in a simplified form to environmental managers, policymakers, and the public. The Nature Index optimizes information use by incorporating expert judgment, monitoring-based estimates, and model-based estimates. The index relies on a network of scientific experts, each of whom is responsible for one or more biodiversity indicators. The resulting set of indicators is supposed to represent the best available knowledge on the state of biodiversity and ecosystems in any given area. The value of each indicator is scaled relative to a reference state, i.e., a predicted value assessed by each expert for a hypothetical undisturbed or sustainably managed ecosystem. Scaled indicator values can be aggregated or disaggregated over different axes representing spatiotemporal dimensions or thematic groups. A range of scaling models can be applied to allow for different ways of interpreting the reference states, e.g., optimal situations or minimum sustainable levels. Statistical testing for differences in space or time can be implemented using Monte Carlo simulations. This study presents the Nature Index framework and details its implementation in Norway. The results suggest that the framework is a functional, efficient, and pragmatic approach for gathering and synthesizing scientific knowledge on the state of biodiversity in any marine or terrestrial ecosystem and has general applicability worldwide.
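A minimal sketch of the scaling and aggregation idea follows: each indicator is scaled against its expert-defined reference state, scaled values are averaged, and uncertainty is propagated with Monte Carlo draws. The indicators, weights and error model below are invented for illustration and are not part of the Norwegian implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical indicators: observed value, expert reference value, and a
# standard error describing the uncertainty of the observation.
indicators = {
    "seabird abundance":  {"value": 40.0, "reference": 100.0, "se": 8.0},
    "kelp forest cover":  {"value": 70.0, "reference":  90.0, "se": 5.0},
    "cod spawning stock": {"value": 55.0, "reference": 110.0, "se": 12.0},
}

def nature_index_samples(indicators, n_draws=10000):
    """Draw Monte-Carlo samples of the aggregated index: each indicator is
    scaled against its reference state (capped at 1) and the scaled values
    are averaged with equal weights."""
    scaled = []
    for ind in indicators.values():
        draws = rng.normal(ind["value"], ind["se"], n_draws)
        scaled.append(np.clip(draws / ind["reference"], 0.0, 1.0))
    return np.mean(scaled, axis=0)          # one index value per draw

samples = nature_index_samples(indicators)
print(f"index = {samples.mean():.2f}  (95% CI {np.percentile(samples, 2.5):.2f}"
      f"-{np.percentile(samples, 97.5):.2f})")
```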

6.
7.
The number of solved structures of macromolecules that have the same fold and thus exhibit some degree of conformational variability is rapidly increasing. It is consequently advantageous to develop a standardized terminology for describing this variability and automated systems for processing protein structures in different conformations. We have developed such a system as a 'front-end' server to our database of macromolecular motions. Our system attempts to describe a protein motion as a rigid-body rotation of a small 'core' relative to a larger one, using a set of hinges. The motion is placed in a standardized coordinate system so that all statistics between any two motions are directly comparable. We find that while this model can accommodate most protein motions, it cannot accommodate all; the degree to which a motion can be accommodated provides an aid in classifying it. Furthermore, we perform an adiabatic mapping (a restrained interpolation) between every two conformations. This gives some indication of the extent of the energetic barriers that need to be surmounted in the motion, and as a by-product results in a 'morph movie'. We make these movies available over the Web to aid in visualization. Many instances of conformational variability occur between proteins with somewhat different sequences. We can accommodate these differences in a rough fashion, generating an 'evolutionary morph'. Users have already submitted hundreds of examples of protein motions to our server, producing a comprehensive set of statistics. So far the statistics show that the median submitted motion has a rotation of approximately 10 degrees and a maximum Cα displacement of 17 Å. Almost all involve at least one large torsion angle change of >140 degrees. The server is accessible at http://bioinfo.mbb.yale.edu/MolMovDB
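The adiabatic mapping itself requires an energy model; a bare-bones stand-in, shown below, is to superpose the two conformations with the Kabsch algorithm and interpolate Cα coordinates linearly, which yields morph frames without the energetic restraints the real server applies. Function names are assumptions.

```python
import numpy as np

def kabsch_superpose(mobile, target):
    """Return `mobile` (N x 3) optimally rotated and translated onto `target`."""
    mc, tc = mobile.mean(axis=0), target.mean(axis=0)
    P, Q = mobile - mc, target - tc
    U, _, Vt = np.linalg.svd(P.T @ Q)
    d = np.sign(np.linalg.det(U @ Vt))          # avoid an improper rotation
    R = U @ np.diag([1.0, 1.0, d]) @ Vt
    return P @ R + tc

def linear_morph(conf_a, conf_b, n_frames=10):
    """Naive morph: superpose conformation B onto A, then interpolate
    Calpha coordinates frame by frame (no energy restraints applied)."""
    b_aligned = kabsch_superpose(conf_b, conf_a)
    return [conf_a + t * (b_aligned - conf_a)
            for t in np.linspace(0.0, 1.0, n_frames)]
```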

8.
Specialized methods are required for processing information on biological objects in complex studies. The DBASE3-PLUS system offers extensive possibilities for working with large and diverse sets of information in multi-aspect statistical analysis. The usefulness of this system for creating and operating on data describing the distribution of genetic and non-genetic traits in a population was demonstrated with a special set of application programs written in the dBase language. The example presented is aimed at identifying genetic markers that may influence the general health of individuals in the population. A special regression procedure was proposed to divide the population sample into subgroups with different health levels. Significant differences in the distribution of some genetic markers were demonstrated between healthy persons and those suffering from chronic diseases.
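The original dBase regression procedure is not described in detail; a modern equivalent of the final marker comparison would be a contingency-table test of marker distributions between health subgroups, sketched below with invented counts.

```python
from scipy.stats import chi2_contingency

# Hypothetical counts of a marker's genotypes in two health subgroups.
#                    genotype 1  genotype 2  genotype 3
healthy         = [         60,         25,         15]
chronically_ill = [         35,         40,         25]

chi2, p_value, dof, expected = chi2_contingency([healthy, chronically_ill])
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")
```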

9.

Background  

New technologies are enabling the measurement of many types of genomic and epigenomic information at scales ranging from the atomic to nuclear. Much of this new data is increasingly structural in nature, and is often difficult to coordinate with other data sets. There is a legitimate need for integrating and visualizing these disparate data sets to reveal structural relationships not apparent when looking at these data in isolation.

10.
Symbolic dynamics is a powerful tool for studying complex dynamical systems. So far many techniques of this kind have been proposed as a means to analyze brain dynamics, but most of them are restricted to single-sensor measurements. Analyzing the dynamics in a channel-wise fashion is an invalid approach for multisite encephalographic recordings, since it ignores any pattern of coordinated activity that might emerge from the coherent activation of distinct brain areas. We suggest here the use of the neural-gas algorithm (Martinez et al. in IEEE Trans Neural Netw 4:558–569, 1993) for encoding the spatiotemporal dynamics of brain activity in the form of a symbolic timeseries. A codebook of k prototypes, best representing the instantaneous multichannel data, is first designed. Each pattern of activity is then assigned to the most similar code vector. The symbolic timeseries derived in this way is mapped to a network, the topology of which encapsulates the most important phase transitions of the underlying dynamical system. Finally, global efficiency is used to characterize the obtained topology. We demonstrate the approach by applying it to EEG data recorded from subjects while performing mental calculations. By working in a contrastive fashion and focusing on the phase aspects of the signals, we show that the underlying dynamics differ significantly in their symbolic representations.
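A hedged sketch of the pipeline is given below, with k-means standing in for the neural-gas codebook (a deliberate simplification): multichannel samples are quantised into symbols, symbol transitions define a graph, and the graph is summarised by its global efficiency. Parameter values and the toy data are assumptions.

```python
import numpy as np
import networkx as nx
from sklearn.cluster import KMeans

def symbolic_dynamics(data, k=8, random_state=0):
    """Encode multichannel activity (samples x channels) as a symbolic
    timeseries and summarise its transition structure.
    Note: k-means is used here as a stand-in for the neural-gas codebook."""
    symbols = KMeans(n_clusters=k, n_init=10,
                     random_state=random_state).fit_predict(data)
    graph = nx.DiGraph()
    for a, b in zip(symbols[:-1], symbols[1:]):
        if a != b:                        # edges are transitions between states
            graph.add_edge(int(a), int(b))
    efficiency = nx.global_efficiency(graph.to_undirected())
    return symbols, graph, efficiency

# Toy example: 1000 "EEG" samples over 16 channels.
rng = np.random.default_rng(1)
data = rng.normal(size=(1000, 16))
symbols, graph, eff = symbolic_dynamics(data)
print(f"{graph.number_of_nodes()} states, global efficiency = {eff:.2f}")
```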

11.
Surveillance screening at scale to identify people infected by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) prior to extensive transmission is key to bringing an end to the coronavirus disease 2019 (COVID-19) pandemic, even though vaccinations have already begun. Here we describe Corona Detective, a sensitive and rapid molecular test to detect the virus, based on loop-mediated isothermal amplification, which could be applied anywhere at low cost. Critically, the method uses freeze-dried reagents that are readily shipped without cold-chain dependence. The reaction detects the viral nucleocapsid gene through a sequence-specific quenched-fluorescence readout, which avoids false positives and also allows multiplexed detection with an internal-control cellular RNA. Corona Detective can be used in 8-tube strips read with a simple open-design fluorescence detector. Other methods to use and produce Corona Detective locally in a variety of formats are possible and already openly shared. Detection specificity is ensured by including positive and negative control reactions run in parallel with the diagnostic reactions. A simple user protocol, including sample preparation, and a bioinformatics pipeline to ensure that viral variants will still be detectable with the SARS-CoV-2 primer sets complete the method. Through rapid production and distribution of Corona Detective reactions, which are quite inexpensive at scale, daily or weekly surveillance testing of large populations, without waiting for symptoms to develop, is anticipated, in combination with vaccination campaigns, to finally bring this pandemic under control.
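The published bioinformatics pipeline is not reproduced here; as a minimal stand-in for the variant check, the sketch below scans a variant genome sequence for exact matches of each primer (forward or reverse complement). The primer sequences and the genome fragment are placeholders, not the actual Corona Detective primer set.

```python
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq):
    return seq.translate(COMPLEMENT)[::-1]

def primers_detectable(primers, variant_genome):
    """Report, for each primer, whether an exact match (forward or reverse
    complement) is still present in a variant genome sequence."""
    genome = variant_genome.upper()
    return {name: (seq in genome or reverse_complement(seq) in genome)
            for name, seq in primers.items()}

# Placeholder primer set and a toy "variant genome" fragment.
primers = {"N_F3": "ACCAGGAACTAATCAGACAAG",
           "N_B3": "GACTTGATCTTTGAAATTTGGATCT"}
genome_fragment = "TTTACCAGGAACTAATCAGACAAGGAACT" + "A" * 50
print(primers_detectable(primers, genome_fragment))
```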

12.
The Mouse Disease Information System (MoDIS) is a data capture system for pathology data from laboratory mice, designed to support phenotyping studies. The system integrates the mouse anatomy (MA) and mouse pathology (MPATH) ontologies into a Microsoft Access database, facilitating the coding of organ, tissue, and disease process to recognized semantic standards. Grading of disease severity provides scores for all lesions that can then be used for quantitative trait locus (QTL) analyses and haplotype association gene mapping. Direct linkage to the Pathbase online database provides reference definitions for disease terms and access to photomicrographic images of similar diagnoses in other mutant mice. MoDIS is an open-source, freely available program (). It provides a valuable tool for setting up a mouse pathology phenotyping program.

13.

Background  

The amount of information stemming from proteomics experiments involving (multidimensional) separation techniques, mass spectrometric analysis, and computational analysis is ever-increasing. Data from such an experimental workflow need to be captured, related, and analyzed. Biological experiments within this scope produce heterogeneous data, ranging from images of one- or two-dimensional protein maps and spectra recorded by tandem mass spectrometry to text-based identifications made by the algorithms that analyze these spectra. Additionally, peptide and corresponding protein information needs to be displayed.

14.
Swallowing depends on physiological variables that have a decisive influence on the swallowing capacity and on the tracheal stress distribution. Prosthetic implantation modifies these values and the overall performance of the trachea. The objective of this work was to develop a decision support system, based on experimental, numerical and statistical approaches with clinical verification, to help the thoracic surgeon decide the position and appropriate dimensions of a Dumon prosthesis for a specific patient in an optimal time and with sufficient robustness. A code for mesh adaptation to any tracheal geometry was implemented and used to develop a robust experimental design, based on Taguchi's method and the analysis of variance. This design was able to establish the main factors influencing swallowing. The equations fitting the stress and the vertical displacement distributions were obtained, and the fitted values were compared to those calculated directly by the finite element method (FEM). Finally, the statistical study was checked and clinically validated using two real patient cases. The vertical displacements and principal stress distribution obtained for the specific tracheal model were in agreement with those calculated by FE simulations, with maximum absolute errors of 1.2 mm and 0.17 MPa, respectively. It was concluded that the resulting decision support tool provides a fast, accurate and simple way for the thoracic surgeon to predict the stress state of the trachea and the reduction in the ability to swallow after implantation. Thus, it will help surgeons make decisions during pre-operative planning of tracheal interventions.
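The fitted equations themselves are not given in the abstract; the sketch below shows the general response-surface idea under assumed variables: a quadratic surface for peak tracheal stress is fitted by least squares to hypothetical FEM runs over prosthesis position and diameter, then evaluated for a candidate placement. All numbers and names are invented for illustration.

```python
import numpy as np

# Hypothetical FEM results: prosthesis position (mm) and diameter (mm)
# versus peak tracheal stress (MPa), from a 3x3 factorial design.
position = np.repeat([10.0, 20.0, 30.0], 3)
diameter = np.tile([12.0, 14.0, 16.0], 3)
stress   = np.array([0.42, 0.48, 0.55, 0.38, 0.44, 0.50, 0.35, 0.41, 0.47])

# Quadratic response surface: stress ~ b0 + b1*p + b2*d + b3*p*d + b4*p^2 + b5*d^2
X = np.column_stack([np.ones_like(position), position, diameter,
                     position * diameter, position ** 2, diameter ** 2])
coeffs, *_ = np.linalg.lstsq(X, stress, rcond=None)

def predict_stress(p, d):
    """Evaluate the fitted surface for a candidate prosthesis placement."""
    return np.array([1.0, p, d, p * d, p ** 2, d ** 2]) @ coeffs

print(f"predicted peak stress at p=25 mm, d=14 mm: {predict_stress(25, 14):.2f} MPa")
```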

15.
Many animal health, welfare and food safety databases include data on clinical and test-based disease diagnoses. However, the circumstances and constraints for establishing the diagnoses vary considerably among databases. Therefore results based on different databases are difficult to compare and compilation of data in order to perform meta-analysis is almost impossible. Nevertheless, diagnostic information collected either routinely or in research projects is valuable in cross comparisons between databases, but there is a need for improved transparency and documentation of the data and the performance characteristics of tests used to establish diagnoses. The objective of this paper is to outline the circumstances and constraints for recording of disease diagnoses in different types of databases, and to discuss these in the context of disease diagnoses when using them for additional purposes, including research. Finally some limitations and recommendations for use of data and for recording of diagnostic information in the future are given. It is concluded that many research questions have such a specific objective that investigators need to collect their own data. However, there are also examples, where a minimal amount of extra information or continued validation could make sufficient improvement of secondary data to be used for other purposes. Regardless, researchers should always carefully evaluate the opportunities and constraints when they decide to use secondary data. If the data in the existing databases are not sufficiently valid, researchers may have to collect their own data, but improved recording of diagnostic data may improve the usefulness of secondary diagnostic data in the future.

16.

Background  

To date, analysis of 16S ribosomal RNA (rRNA) sequences has been the de facto gold standard for assessing phylogenetic relationships among prokaryotes. However, the branching order of the individual phyla is not well resolved in 16S rRNA-based trees. In search of an improvement, new phylogenetic methods have been developed alongside the growing availability of complete genome sequences. Unfortunately, only a few genes in prokaryotic genomes qualify as universal phylogenetic markers, and almost all of them have a lower information content than the 16S rRNA gene. Therefore, emphasis has been placed on methods that are based on multiple genes or even entire genomes. The concatenation of ribosomal protein sequences is one such method, and it has been ascribed improved resolution. Since there is neither a comprehensive database of ribosomal protein sequences nor a tool that assists in sequence retrieval and the generation of input files for phylogenetic reconstruction programs, RibAlign has been developed to fill this gap.
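The concatenation step itself is straightforward; the hedged sketch below joins per-taxon sequences from several aligned ribosomal protein families into one super-alignment, padding with gaps where a family is missing for a taxon. The function name and toy sequences are assumptions, not RibAlign code.

```python
def concatenate_alignments(alignments):
    """Concatenate several per-protein alignments (dicts of taxon -> aligned
    sequence) into one super-alignment; missing taxa are padded with gaps."""
    taxa = sorted({t for aln in alignments for t in aln})
    concatenated = {t: "" for t in taxa}
    for aln in alignments:
        length = len(next(iter(aln.values())))       # columns in this alignment
        for t in taxa:
            concatenated[t] += aln.get(t, "-" * length)
    return concatenated

# Toy example with two ribosomal protein families and three taxa.
rpl2  = {"E_coli": "MAVKK-", "B_subtilis": "MAIKKY", "T_maritima": "MAVRKY"}
rps12 = {"E_coli": "PTINQL", "B_subtilis": "PTI-QL"}   # missing in T_maritima
print(concatenate_alignments([rpl2, rps12]))
```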

17.
18.
19.
In recent years, identification of alternative splicing (AS) variants has been gaining momentum. We developed AVATAR, a database for documenting AS using 5,469,433 human EST sequences and 26,159 human mRNA sequences. AVATAR contains 12,000 alternative splicing sites identified by mapping ESTs and mRNAs onto the whole human genome sequence. AVATAR also contains AS information for six eukaryotes. We mapped EST alignment information onto a graph model in which exons and introns are represented by vertices and edges, respectively. AVATAR can be queried using search parameters such as (1) gene names, (2) the number of identified AS events in a gene, and (3) the minimal number of ESTs supporting a splicing site. The system provides visualized AS information for queried genes.
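A minimal sketch of the graph model described above follows: exon coordinates become vertices, junctions between consecutive exons in an EST become edges, and edge weights count supporting ESTs (matching the "minimal number of ESTs supporting a splicing site" query parameter). Names and coordinates are invented; this is not the AVATAR implementation.

```python
from collections import defaultdict

def build_splice_graph(est_exon_chains):
    """Build a simple splice graph: each exon (start, end) is a vertex and each
    observed intron (junction between consecutive exons in one EST) is an edge,
    weighted by the number of supporting ESTs."""
    edges = defaultdict(int)
    vertices = set()
    for chain in est_exon_chains:
        vertices.update(chain)
        for donor, acceptor in zip(chain[:-1], chain[1:]):
            edges[(donor, acceptor)] += 1
    return vertices, dict(edges)

# Toy gene: three ESTs, one of which skips the middle exon (an AS event).
ests = [[(100, 200), (300, 400), (500, 600)],
        [(100, 200), (300, 400), (500, 600)],
        [(100, 200), (500, 600)]]             # exon-skipping variant
vertices, edges = build_splice_graph(ests)
print(edges)   # the edge supported by 1 EST marks the alternative junction
```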

Availability


20.

Background  

Inferring gene regulatory networks from data requires the development of algorithms devoted to structure extraction. When only static data are available, gene interactions may be modelled by a Bayesian network (BN) that represents the presence of direct interactions from regulators to regulees through conditional probability distributions. We used enhanced evolutionary algorithms to stochastically evolve a set of candidate BN structures and found the model that best fits the data without prior knowledge.
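As a hedged toy illustration of the approach, not the enhanced evolutionary algorithm of the paper, the sketch below evolves candidate BN structures over binary data: edge-toggling mutation, an acyclicity filter, and selection on a penalised log-likelihood score. All parameter values and names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def is_acyclic(adj):
    """Check a candidate structure (adj[i, j] = 1 means an edge i -> j) for
    cycles by repeatedly removing nodes that have no remaining parents."""
    remaining = list(range(len(adj)))
    while remaining:
        roots = [i for i in remaining if adj[remaining, i].sum() == 0]
        if not roots:
            return False
        for r in roots:
            remaining.remove(r)
    return True

def score(adj, data):
    """Penalised log-likelihood of binary data under the candidate structure,
    using smoothed maximum-likelihood conditional probability tables."""
    n_samples, n_vars = data.shape
    total = 0.0
    for j in range(n_vars):
        parents = np.flatnonzero(adj[:, j])
        keys = [tuple(row) for row in data[:, parents]]   # parent configurations
        for key in set(keys):
            rows = data[[k == key for k in keys], j]
            p1 = (rows.sum() + 1) / (len(rows) + 2)        # Laplace smoothing
            total += np.sum(rows * np.log(p1) + (1 - rows) * np.log(1 - p1))
    return total - 0.5 * np.log(n_samples) * adj.sum()     # BIC-style penalty

def evolve_structure(data, pop_size=20, generations=50, mutation_rate=0.1):
    """Very small evolutionary loop: mutate edges, keep acyclic candidates,
    and retain the best-scoring structures each generation."""
    n_vars = data.shape[1]
    population = [np.zeros((n_vars, n_vars), dtype=int) for _ in range(pop_size)]
    for _ in range(generations):
        offspring = []
        for adj in population:
            child = adj.copy()
            for i in range(n_vars):
                for j in range(n_vars):
                    if i != j and rng.random() < mutation_rate:
                        child[i, j] ^= 1                   # toggle edge i -> j
            if is_acyclic(child):
                offspring.append(child)
        population = sorted(population + offspring,
                            key=lambda a: score(a, data), reverse=True)[:pop_size]
    return population[0]

# Toy data: X1 copies X0 with noise, X2 is independent.
x0 = rng.integers(0, 2, 200)
x1 = np.where(rng.random(200) < 0.9, x0, 1 - x0)
x2 = rng.integers(0, 2, 200)
best = evolve_structure(np.column_stack([x0, x1, x2]))
print(best)   # expect an edge between X0 and X1, none involving X2
```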
