Similar Documents
20 similar documents found (search time: 31 ms)
1.
2.
3.
A wide variety of software tools are available to analyze microarray data. To identify the optimal software for a given project, it is essential to define specific criteria on which to evaluate the advantages of each tool's key features. In this review we describe the results of our comparison of several software tools. We then conclude with a discussion of the subset of tools that are most commonly used and describe the features that would constitute the “ideal microarray analysis software suite.”

4.
The analysis of known protein structures is a very valuable and indispensable tool for deciphering the complex rules relating sequence to structure in proteins. On the other hand, the design of novel proteins is certainly the most severe test of our understanding of such rules. In this report we describe our own attempt to develop appropriate tools for the investigation of known protein structure properties and their applications to the design of a novel, all β protein. The success of the design project is a demonstration of the usefulness of careful analysis of the data base of known protein structures. © 1994 John Wiley & Sons, Inc.

5.
Mathematical modelling and computational analysis play an essential role in improving our capability to elucidate the functions and characteristics of complex biological systems such as metabolic, regulatory and cell signalling pathways. The modelling and concomitant simulation render it possible to predict the cellular behaviour of systems under various genetically and/or environmentally perturbed conditions. This motivates systems biologists/bioengineers/bioinformaticians to develop new tools and applications, allowing non-experts to easily conduct such modelling and analysis. However, among a multitude of systems biology tools developed to date, only a handful of projects have adopted a web-based approach to kinetic modelling. In this report, we evaluate the capabilities and characteristics of current web-based tools in systems biology and identify desirable features, limitations and bottlenecks for further improvements in terms of usability and functionality. A short discussion on software architecture issues involved in web-based applications and the approaches taken by existing tools is included for those interested in developing their own simulation applications.
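To make the notion of kinetic modelling concrete, the following minimal sketch (our own toy two-species mass-action model, not one taken from any of the reviewed tools; it assumes NumPy and SciPy are installed) simulates a pathway and predicts its behaviour under a perturbed rate constant:

    import numpy as np
    from scipy.integrate import solve_ivp

    # Toy kinetic model: substrate S is converted to product P (rate k1),
    # and P is degraded (rate k2): dS/dt = -k1*S ; dP/dt = k1*S - k2*P
    def pathway(t, y, k1, k2):
        s, p = y
        return [-k1 * s, k1 * s - k2 * p]

    for k1 in (0.5, 1.0):  # reference vs. "genetically perturbed" rate constant
        sol = solve_ivp(pathway, (0.0, 10.0), [1.0, 0.0], args=(k1, 0.3),
                        t_eval=np.linspace(0.0, 10.0, 5))
        print(f"k1={k1}: P(t) =", np.round(sol.y[1], 3))

A web-based modelling tool wraps exactly this kind of integrate-and-compare loop behind a browser interface.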

6.
The wealth of information from various genome sequencing projects provides the biologist with a new perspective from which to analyze, and design experiments with, mammalian systems. The complexity of the information, however, requires new software tools, and numerous such tools are now available. Which type and which specific system is most effective depends, in part, upon how much sequence is to be analyzed and with what level of experimental support. Here we survey a number of mammalian genomic sequence analysis systems with respect to the data they provide and the ease of their use. The hope is to aid the experimental biologist in choosing the most appropriate tool for their analyses.

7.
Social network analysis has long been a central topic in sociology. Until the era of information technology, however, the availability of data, mainly collected by the traditional method of personal surveys, was highly limited and prevented large-scale analysis. Recently, the exploding amount of automatically generated data has completely changed the pattern of research. For instance, the enormous amount of data from so-called high-throughput biological experiments has introduced a systematic or network viewpoint to traditional biology. Is “high-throughput” sociological data generation, then, possible? Google, which has become one of the most influential symbols of the new Internet paradigm within the last ten years, might provide torrents of data sources for such studies in this (present and forthcoming) digital era. We investigate social networks between people by extracting information from the Web, and introduce new tools for the analysis of such networks in the context of the statistical physics of complex systems, or sociophysics. As a concrete and illustrative example, the members of the 109th United States Senate are analyzed, and it is demonstrated that the methods of construction and analysis are applicable to various other weighted networks.
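The construction step can be pictured in a few lines of Python; the sketch below (using the networkx library; the names and co-occurrence counts are invented for illustration, not taken from the Senate dataset) builds such a weighted network:

    import networkx as nx

    # Hypothetical pairwise co-occurrence counts extracted from the Web
    # (e.g., the number of pages mentioning both people).
    cooccurrence = {
        ("Senator A", "Senator B"): 120,
        ("Senator A", "Senator C"): 45,
        ("Senator B", "Senator C"): 80,
    }

    G = nx.Graph()
    for (u, v), count in cooccurrence.items():
        G.add_edge(u, v, weight=count)   # edge weight = strength of association

    # Weighted degree ("node strength") is a natural first statistic here.
    print(dict(G.degree(weight="weight")))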

8.
Centralisation of tools for analysis of genomic data is paramount in ensuring that research is always carried out on the latest currently available data. As such, World Wide Web sites providing a range of online analyses and displays of data can play a crucial role in guaranteeing consistency of in silico work. In this respect, the protozoan parasite research community is served by several resources, either focussing on data and tools for one species or taking a broader view and providing tools for analysis of data from many species, thereby facilitating comparative studies. In this paper, we give a broad overview of the online resources available. We then focus on the GeneDB project, detailing the features and tools currently available through it. Finally, we discuss data curation and its importance in keeping genomic data 'relevant' to the research community.

9.
BACKGROUND: Progress in the modeling of biological systems strongly relies on the availability of specialized computer-aided tools. To that end, the Taverna Workbench eases the integration of software tools for life science research and provides a common workflow-based framework for computational experiments in biology. RESULTS: The Taverna services for Systems Biology (Tav4SB) project provides a set of new Web service operations which extend the functionality of the Taverna Workbench in the domain of systems biology. Tav4SB operations allow the user to perform numerical simulation of the deterministic semantics of biological models, or model checking of their stochastic semantics. On top of this functionality, Tav4SB enables the construction of high-level experiments. As an illustration of the possibilities offered by our project, we apply multi-parameter sensitivity analysis. To visualize the results of model analysis, a flexible plotting operation is provided as well. Tav4SB operations are executed in a simple grid environment, integrating heterogeneous software such as Mathematica, PRISM and SBML ODE Solver. The user guide, contact information, full documentation of the available Web service operations, workflows and other additional resources can be found at the Tav4SB project's Web page: http://bioputer.mimuw.edu.pl/tav4sb/. CONCLUSIONS: The Tav4SB Web service provides a set of integrated tools in a domain for which Web-based applications are still not as widely available as in other areas of computational biology. Moreover, we extend the dedicated hardware base for the computationally expensive task of simulating cellular models. Finally, we promote the standardization of models and experiments as well as the accessibility and usability of remote services.
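The Tav4SB service API itself is documented on the project page; purely to illustrate what multi-parameter sensitivity analysis computes, here is a minimal finite-difference sketch on a toy model (our own example, not the project's implementation):

    # Toy model: the steady state of dx/dt = k_in - k_out*x is x* = k_in/k_out.
    def output(k_in, k_out):
        return k_in / k_out

    base = {"k_in": 1.0, "k_out": 0.5}
    y0 = output(**base)
    for name in base:
        bumped = dict(base)
        bumped[name] *= 1.01                       # perturb one parameter by 1%
        rel_change = (output(**bumped) - y0) / y0
        print(f"normalized sensitivity to {name}: {rel_change / 0.01:+.2f}")

The printed coefficients (+1 for k_in, about -1 for k_out) say how strongly, and in which direction, the model output responds to each parameter.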

10.
Mass spectrometry (MS) is a technique used in biological studies. It associates a spectrum with a biological sample. A spectrum consists of pairs of values (intensity, m/z), where intensity measures the abundance of biomolecules (such as proteins) with a given mass-to-charge ratio (m/z) present in the originating sample. In proteomics experiments, MS spectra are used to identify expression patterns in clinical samples that may be responsible for diseases. Recently, to improve the identification of the peptides/proteins underlying such patterns, the MS/MS process has been used, which performs a cascade of mass spectrometric analyses on selected peaks. The latter technique has been demonstrated to improve the identification and quantification of proteins/peptides in samples. Nevertheless, MS analysis deals with a huge amount of data, often affected by noise, and thus requires automatic data management systems. Tools have been developed, often supplied with the instruments, that allow: (i) spectra analysis and visualization, (ii) pattern recognition, (iii) protein database querying, and (iv) peptide/protein quantification and identification. Currently most of the tools supporting these phases need to be optimized to improve the protein (and protein functionality) identification process. In this article we survey applications that support spectrometrists and biologists in obtaining information from biological samples, analyzing the available software for the different phases. We consider different mass spectrometry techniques, and thus different requirements. We focus on tools for (i) data preprocessing, which prepare the results obtained from spectrometers for analysis; (ii) spectra analysis, representation and mining, aimed at identifying common and/or hidden patterns in spectra sets or at classifying data; (iii) database querying to identify peptides; and (iv) improving and boosting the identification and quantification of selected peaks. We outline some open problems and report on requirements that represent new challenges for bioinformatics.
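As a small taste of phase (i), data preprocessing, the sketch below (plain NumPy/SciPy on a synthetic spectrum; none of the surveyed packages is involved) subtracts a crude baseline and picks peaks by prominence:

    import numpy as np
    from scipy.signal import find_peaks

    # Synthetic spectrum: two Gaussian "peptide peaks" over noise.
    mz = np.linspace(1000, 1100, 2000)
    rng = np.random.default_rng(0)
    intensity = (100 * np.exp(-(mz - 1020) ** 2 / 2)
                 + 60 * np.exp(-(mz - 1075) ** 2 / 2)
                 + rng.normal(0, 2, mz.size))

    intensity -= np.median(intensity)          # crude baseline removal
    peaks, _ = find_peaks(intensity, prominence=20)
    print("picked m/z values:", np.round(mz[peaks], 1))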

11.
Journal of Physiology, 2009, 103(6): 315-323
The EEG is one of the most commonly used tools in brain research. Though of high relevance in research, the data obtained are very noisy and nonstationary. In the present article we investigate the applicability of a nonlinear data analysis method, recurrence quantification analysis (RQA), to such data. The method rests solely on the natural property of recurrence, a phenomenon inherent to complex systems such as the brain. We show that this method is indeed suitable for the analysis of EEG data and that it might improve contemporary EEG analysis.
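The core object of RQA is easy to compute; this sketch (plain NumPy on a synthetic signal; a full analysis would add delay embedding and line-based measures such as determinism) builds a recurrence matrix and its simplest summary, the recurrence rate:

    import numpy as np

    # Synthetic noisy oscillation standing in for an EEG channel.
    rng = np.random.default_rng(1)
    x = np.sin(np.linspace(0, 20 * np.pi, 500)) + 0.3 * rng.normal(size=500)

    # Recurrence matrix: R[i, j] = 1 when states i and j lie within eps.
    eps = 0.5
    R = (np.abs(x[:, None] - x[None, :]) < eps).astype(int)

    # Recurrence rate: the fraction of recurrent pairs of time points.
    print("recurrence rate:", R.mean())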

12.
Microarray technology has become a standard molecular biology tool. Experimental data have been generated on a huge number of organisms, tissue types, treatment conditions and disease states. The Gene Expression Omnibus (GEO; Barrett et al., 2005), developed by the National Center for Biotechnology Information (NCBI) at the National Institutes of Health, is a repository of nearly 140,000 gene expression experiments. The BioConductor project (Gentleman et al., 2004) is an open-source and open-development software project built in the R statistical programming environment (R Development Core Team, 2005) for the analysis and comprehension of genomic data. The tools contained in the BioConductor project represent many state-of-the-art methods for the analysis of microarray and genomics data. We have developed a software tool that allows access to the wealth of information within GEO directly from BioConductor, eliminating many of the formatting and parsing problems that have made such analyses labor-intensive in the past. The software, called GEOquery, effectively establishes a bridge between GEO and BioConductor. Easy access to GEO data from BioConductor will likely lead to new analyses of GEO data using novel and rigorous statistical and bioinformatic tools. Facilitating analyses and meta-analyses of microarray data will increase the efficiency with which biologically important conclusions can be drawn from published genomic data. Availability: GEOquery is available as part of the BioConductor project.
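GEOquery itself is an R/BioConductor package; for a Python reader, a comparable bridge exists in the GEOparse package, sketched here under the assumption that GEOparse is installed and that the accession is any valid GEO series:

    import GEOparse

    # Download and parse a GEO series by accession (cached in ./geo_cache).
    gse = GEOparse.get_GEO(geo="GSE1563", destdir="./geo_cache")

    print(gse.metadata["title"])                # series-level metadata
    for name, gsm in list(gse.gsms.items())[:3]:
        # each sample (GSM) exposes its expression table as a DataFrame
        print(name, gsm.table.shape)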

13.
The application of novel assays for basic cell research is tightly linked to the development of easy-to-use and versatile tools and protocols for implementing such technologies for a wide range of applications and model species. The bimolecular fluorescence complementation (BiFC) assay is one such novel method for which tools and protocols for its application in plant cell research are still being developed. BiFC is a powerful tool which enables not only detection, but also visualization and subcellular localization of protein–protein interactions in living cells. Here we describe the application of BiFC in plant cells while focusing on the use of our versatile set of vectors which were specifically designed to facilitate the transformation, expression and imaging of protein–protein interactions in various plant species. We discuss the considerations of using our system in various plant model systems, the use of single versus multiple expression cassettes, the application of our vectors using various transformation methods and the use of internal fluorescent markers which can assist in signal localization and easy data acquisition in living cells.

14.
Traditional laboratory experiments, rehabilitation clinics, and wearable sensors offer biomechanists a wealth of data on healthy and pathological movement. To harness the power of these data and make research more efficient, modern machine learning techniques are starting to complement traditional statistical tools. This survey summarizes the current usage of machine learning methods in human movement biomechanics and highlights best practices that will enable critical evaluation of the literature. We carried out a PubMed/Medline database search for original research articles that used machine learning to study movement biomechanics in patients with musculoskeletal and neuromuscular diseases. Most studies that met our inclusion criteria focused on classifying pathological movement, predicting risk of developing a disease, estimating the effect of an intervention, or automatically recognizing activities to facilitate out-of-clinic patient monitoring. We found that research studies build and evaluate models inconsistently, which motivated our discussion of best practices. We provide recommendations for training and evaluating machine learning models and discuss the potential of several underutilized approaches, such as deep learning, to generate new knowledge about human movement. We believe that cross-training biomechanists in data science and a cultural shift toward sharing of data and tools are essential to maximize the impact of biomechanics research.
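One concrete best practice behind those recommendations is subject-wise evaluation: all trials from a patient must fall into the same fold, or the model is tested on people it has already seen. A minimal scikit-learn sketch with synthetic data (the feature and label arrays are invented placeholders):

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import GroupKFold, cross_val_score

    # Synthetic gait-like dataset: 100 trials x 10 features, 20 subjects.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 10))
    y = rng.integers(0, 2, size=100)           # e.g., pathological vs. healthy
    subjects = np.repeat(np.arange(20), 5)     # 5 trials per subject

    # GroupKFold keeps each subject's trials in a single fold, so every
    # test fold contains only unseen subjects (no within-subject leakage).
    scores = cross_val_score(RandomForestClassifier(random_state=0), X, y,
                             cv=GroupKFold(n_splits=5), groups=subjects)
    print("subject-wise CV accuracy:", scores.round(2))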

15.
Modern biomedical research is evolving with the rapid growth of diverse data types, biophysical characterization methods, computational tools and extensive collaboration among researchers spanning various communities and having complementary backgrounds and expertise. Collaborating researchers are increasingly dependent on shared data and tools made available by other investigators with common interests, thus forming communities that transcend the traditional boundaries of the single research laboratory or institution. Barriers remain, however, to the formation of these virtual communities, usually due to the steep learning curve associated with becoming familiar with new tools, or to the difficulties associated with transferring data between tools. Recognizing the need for shared reference data and analysis tools, we are developing an integrated knowledge environment that supports productive interactions among researchers. Here we report on our current collaborative environment, the Collaboratory for MS3D (C-MS3D), which focuses on bringing together structural biologists working on mass spectrometry-based methods for the analysis of tertiary and quaternary macromolecular structures (MS3D). C-MS3D is a Web portal designed to provide collaborators with a shared work environment that integrates data storage and management with data analysis tools. Files are stored and archived along with pertinent metadata in such a way as to allow file handling to be tracked (data provenance) and data files to be searched using keywords and modification dates. While at this time the portal is designed around a specific application, the shared work environment is a general approach to building collaborative work groups. The goal is not only to provide a common data sharing and archiving system, but also to assist in the building of new collaborations and to spur the development of new tools and technologies.
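C-MS3D's internal schema is not spelled out in this abstract; the sketch below merely illustrates the kind of record that provenance tracking and keyword search imply: a content hash, a modification date and keywords stored alongside each archived file (standard library only; the file name is hypothetical):

    import hashlib
    import os
    import time

    def provenance_record(path, keywords):
        """Minimal metadata record for an archived data file."""
        with open(path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        return {
            "file": os.path.basename(path),
            "sha256": digest,                            # detects silent changes
            "modified": time.ctime(os.path.getmtime(path)),
            "keywords": keywords,                        # enables keyword search
        }

    # Hypothetical usage:
    # record = provenance_record("run42.mzML", ["MS3D", "crosslink"])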

16.
Many bioinformatics tools handle input/output (I/O) through the file systems of the most common operating systems, such as Linux or MS Windows. However, as data volumes increase, there is a need for more efficient disk access, ad hoc memory management and specific page-replacement policies. We propose a device driver that can be used by multiple applications. It keeps the application code unchanged, providing a non-intrusive and flexible strategy for I/O calls that may be adopted in a straightforward manner. With our approach, database developers can define their own I/O management strategies. We used our device driver to manage Basic Local Alignment Search Tool (BLAST) I/O calls. Based on preliminary experimental results with National Center for Biotechnology Information (NCBI) BLAST, this approach can provide database management system (DBMS)-like data management features, which may be used for BLAST and many other computational biology applications.
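A page-replacement policy is easiest to see in miniature. The toy below (a user-space Python model; the paper's contribution is a kernel-level driver, which this does not attempt to reproduce) implements LRU eviction over fixed-size pages of a database file:

    from collections import OrderedDict

    class LRUPageCache:
        """Toy user-space model of an LRU page-replacement policy."""
        def __init__(self, capacity):
            self.capacity = capacity
            self.pages = OrderedDict()               # page_id -> page bytes

        def access(self, page_id, load_page):
            if page_id in self.pages:
                self.pages.move_to_end(page_id)      # mark most recently used
            else:
                if len(self.pages) >= self.capacity:
                    self.pages.popitem(last=False)   # evict least recently used
                self.pages[page_id] = load_page(page_id)
            return self.pages[page_id]

    def read_page(i, path="blastdb.bin", size=4096):  # hypothetical file name
        with open(path, "rb") as f:
            f.seek(i * size)
            return f.read(size)

    cache = LRUPageCache(capacity=1024)
    # chunk = cache.access(0, read_page)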

17.
In recent years sympatry networks have been proposed as a means of performing biogeographic analysis, but their computation has posed practical difficulties that limited their use. We propose a novel approach that brings well-established network analysis tools closer to the study of sympatry patterns, using both geographic and environmental data associated with the occurrence of species. Our proposed algorithm, SGraFuLo, combines fuzzy logic and numerical methods to compute the network of interest directly from point locality records, without the need for specialized tools such as geographic information systems, thereby simplifying the process for end users. By posing the problem in matrix terms, SGraFuLo achieves remarkable efficiency even for large datasets, taking advantage of well-established scientific computing algorithms. We present sympatry networks constructed using real-world data collected in Mexico and Central America and highlight the potential of our approach in the analysis of overlapping niches of species, which could have important applications even in evolutionary studies. We also present details of the design and implementation of the algorithm, as well as experiments that show its efficiency. The source code is freely released and the datasets are also available to support the reproducibility of our results.
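SGraFuLo's exact formulation is given in the paper; only to make "fuzzy sympatry computed in matrix terms" tangible, here is an invented NumPy toy in which pairwise record distances pass through a Gaussian membership function (the kernel, its width and the edge threshold are all our assumptions):

    import numpy as np

    # Hypothetical point localities (lon, lat) for three species.
    localities = {
        "sp_A": np.array([[-99.1, 19.4], [-99.3, 19.5]]),
        "sp_B": np.array([[-99.2, 19.4], [-90.5, 14.6]]),
        "sp_C": np.array([[-90.6, 14.7]]),
    }

    def fuzzy_sympatry(pts_a, pts_b, sigma=0.5):
        # All pairwise distances between the two species' records ...
        d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
        # ... mapped to a fuzzy membership: 1 = co-located, -> 0 far apart.
        return np.exp(-d ** 2 / (2 * sigma ** 2)).max()

    names = list(localities)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            score = fuzzy_sympatry(localities[a], localities[b])
            if score > 0.5:          # keep an edge in the sympatry network
                print(f"{a} -- {b} (membership {score:.2f})")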

18.
Over the last several years, many sequence alignment tools have appeared and become popular owing to the fast evolution of next-generation sequencing technologies. Naturally, researchers who use such tools want maximum performance when executing them on modern infrastructures. Today's NUMA (non-uniform memory access) architectures present major challenges in getting such applications to achieve good scalability as more processors/cores are used. The memory system of NUMA machines is highly complex and may be the main cause of an application's performance loss. Because NUMA systems contain several memory banks, accesses by a given processor to a remote bank incur increased latency. This phenomenon is usually attenuated by strategies that increase the locality of memory accesses. However, NUMA systems may also suffer from contention problems, which occur when concurrent accesses are concentrated on a small number of banks. Sequence alignment tools use large data structures to hold the reference genome to which all reads are aligned, and are therefore very sensitive to performance problems related to the memory system. The main goal of this study is to explore the trade-offs between data locality and data dispersion in NUMA systems. We performed experiments with several popular sequence alignment tools on two widely available NUMA systems to assess the performance of different memory allocation policies and data partitioning strategies. We find that no single method is best in all cases. However, we conclude that memory interleaving is the allocation strategy that provides the best performance when a large number of processors and memory banks are used. For data partitioning, the best results are usually obtained when the number of partitions is larger, sometimes combined with an interleave policy.
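The interleaving result can be applied without modifying an aligner at all, since Linux exposes the allocation policy through numactl. A sketch (assuming numactl is installed; BWA here stands in for any aligner, and the file names are placeholders):

    import subprocess

    # Spread the aligner's memory pages across all NUMA nodes, the policy
    # this study found to scale best at high processor/memory-bank counts.
    cmd = ["numactl", "--interleave=all",
           "bwa", "mem", "-t", "32", "ref.fa", "reads.fq"]
    with open("aln.sam", "w") as out:
        subprocess.run(cmd, stdout=out, check=True)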

19.
We report data from an internet questionnaire of sixty number-trivia questions. Participants were asked for the number of cups in their house, the number of cities they know, and 58 other quantities. We compare the answers of familial sinistrals – individuals who are left-handed themselves or have a left-handed close blood relative – with those of pure familial dextrals – right-handed individuals who reported having only right-handed close blood relatives. We show that familial sinistrals use rounder numbers than pure familial dextrals in their survey responses. Round numbers in the decimal system are those that are multiples of powers of 10, or of half or a quarter of a power of 10. Roundness is a gradient concept: e.g., 100 is rounder than 50 or 200. We show that very round numbers like 100 and 1000 are used with 25% greater likelihood by familial sinistrals than by pure familial dextrals, while pure familial dextrals are more likely to use less round numbers such as 25, 60, and 200. We then use Sigurd's (1988, Language in Society) index of the roundness of a number and report that familial sinistrals' responses are significantly rounder on average than those of pure familial dextrals. To explain the difference, we propose that the cognitive effort of using exact numbers is greater for the familial sinistral group because their language and number systems tend to be more distributed over both hemispheres of the brain. Our data support the view that exact and approximate quantities are processed by two separate cognitive systems. Specifically, our behavioral data corroborate the view that the evolutionarily older, approximate number system is present in both hemispheres of the brain, while the exact number system tends to be localized in only one hemisphere.
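The definition of roundness quoted above is mechanical enough to code directly; this toy scorer (our own simplification, not Sigurd's 1988 index) ranks a number by the largest power of 10 of which it is a whole, half or quarter multiple:

    def roundness(n):
        """Toy roundness score: higher = rounder (not Sigurd's index)."""
        for k in range(10, -1, -1):
            base = 10 ** k
            for divisor, penalty in ((base, 0), (base / 2, 1), (base / 4, 2)):
                if divisor <= n and n % divisor == 0:
                    score = 3 * k - penalty
                    if n > divisor:   # 200 is a multiple of 100 but less
                        score -= 1    # round than 100 itself
                    return score
        return -1

    for n in (100, 1000, 50, 25, 60, 200):
        print(n, roundness(n))

Running it reproduces the gradient in the abstract: 1000 and 100 score highest, 50 and 200 lower, 25 and 60 lower still.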

20.
Biological systems by default involve complex components with complex relationships. To decipher how biological systems work, we assume that one needs to integrate information over multiple levels of complexity. The songbird vocal communication system is ideal for such integration due to many years of ethological investigation and a discrete, dedicated brain network. Here we announce the beginnings of a songbird brain integrative project that involves high-throughput molecular, anatomical, electrophysiological and behavioral levels of analysis. We first formed a rationale for the inclusion of specific biological levels of analysis, then developed high-throughput molecular technologies for songbird brains, developed technologies for the combined analysis of electrophysiological activity and gene regulation in awake behaving animals, and developed bioinformatic tools that predict causal interactions within and between biological levels of organization. This integrative brain project is fitting for the interdisciplinary approaches taken in the current songbird issue of the Journal of Comparative Physiology A and is expected to be conducive to deciphering how brains generate and perceive complex behaviors.
